00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2466
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3727
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.034 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.035 The recommended git tool is: git
00:00:00.035 using credential 00000000-0000-0000-0000-000000000002
00:00:00.038 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.052 Fetching changes from the remote Git repository
00:00:00.057 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.082 Using shallow fetch with depth 1
00:00:00.083 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.083 > git --version # timeout=10
00:00:00.121 > git --version # 'git version 2.39.2'
00:00:00.121 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.157 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.157 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.418 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.427 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.438 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.438 > git config core.sparsecheckout # timeout=10
00:00:03.447 > git read-tree -mu HEAD # timeout=10
00:00:03.461 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.479 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.480 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:03.588 [Pipeline] Start of Pipeline
00:00:03.599 [Pipeline] library
00:00:03.601 Loading library shm_lib@master
00:00:03.601 Library shm_lib@master is cached. Copying from home.
00:00:03.618 [Pipeline] node
00:00:03.635 Running on WFP20 in /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:03.637 [Pipeline] {
00:00:03.647 [Pipeline] catchError
00:00:03.649 [Pipeline] {
00:00:03.663 [Pipeline] wrap
00:00:03.671 [Pipeline] {
00:00:03.679 [Pipeline] stage
00:00:03.681 [Pipeline] { (Prologue)
00:00:03.891 [Pipeline] sh
00:00:04.179 + logger -p user.info -t JENKINS-CI
00:00:04.196 [Pipeline] echo
00:00:04.198 Node: WFP20
00:00:04.205 [Pipeline] sh
00:00:04.504 [Pipeline] setCustomBuildProperty
00:00:04.512 [Pipeline] echo
00:00:04.514 Cleanup processes
00:00:04.518 [Pipeline] sh
00:00:04.802 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:04.802 1188215 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:04.813 [Pipeline] sh
00:00:05.095 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:05.095 ++ grep -v 'sudo pgrep'
00:00:05.095 ++ awk '{print $1}'
00:00:05.095 + sudo kill -9
00:00:05.095 + true
00:00:05.107 [Pipeline] cleanWs
00:00:05.114 [WS-CLEANUP] Deleting project workspace...
00:00:05.114 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.120 [WS-CLEANUP] done
00:00:05.123 [Pipeline] setCustomBuildProperty
00:00:05.133 [Pipeline] sh
00:00:05.415 + sudo git config --global --replace-all safe.directory '*'
00:00:05.492 [Pipeline] httpRequest
00:00:06.089 [Pipeline] echo
00:00:06.090 Sorcerer 10.211.164.20 is alive
00:00:06.100 [Pipeline] retry
00:00:06.103 [Pipeline] {
00:00:06.115 [Pipeline] httpRequest
00:00:06.118 HttpMethod: GET
00:00:06.118 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.119 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.127 Response Code: HTTP/1.1 200 OK
00:00:06.127 Success: Status code 200 is in the accepted range: 200,404
00:00:06.128 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.963 [Pipeline] }
00:00:06.979 [Pipeline] // retry
00:00:06.985 [Pipeline] sh
00:00:07.329 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.342 [Pipeline] httpRequest
00:00:07.774 [Pipeline] echo
00:00:07.775 Sorcerer 10.211.164.20 is alive
00:00:07.780 [Pipeline] retry
00:00:07.781 [Pipeline] {
00:00:07.789 [Pipeline] httpRequest
00:00:07.792 HttpMethod: GET
00:00:07.792 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:07.792 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:07.815 Response Code: HTTP/1.1 200 OK
00:00:07.815 Success: Status code 200 is in the accepted range: 200,404
00:00:07.815 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:47.083 [Pipeline] }
00:00:47.102 [Pipeline] // retry
00:00:47.110 [Pipeline] sh
00:00:47.397 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:49.947 [Pipeline] sh
00:00:50.271 + git -C spdk log --oneline -n5
00:00:50.271 c13c99a5e test: Various fixes for Fedora40
00:00:50.271 726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:00:50.271 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:00:50.271 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:00:50.271 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
00:00:50.281 [Pipeline] }
00:00:50.295 [Pipeline] // stage
00:00:50.303 [Pipeline] stage
00:00:50.306 [Pipeline] { (Prepare)
00:00:50.321 [Pipeline] writeFile
00:00:50.336 [Pipeline] sh
00:00:50.619 + logger -p user.info -t JENKINS-CI
00:00:50.631 [Pipeline] sh
00:00:50.914 + logger -p user.info -t JENKINS-CI
00:00:50.927 [Pipeline] sh
00:00:51.210 + cat autorun-spdk.conf
00:00:51.210 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:51.210 SPDK_TEST_FUZZER_SHORT=1
00:00:51.210 SPDK_TEST_FUZZER=1
00:00:51.210 SPDK_RUN_UBSAN=1
00:00:51.216 RUN_NIGHTLY=1
00:00:51.220 [Pipeline] readFile
00:00:51.243 [Pipeline] withEnv
00:00:51.245 [Pipeline] {
00:00:51.257 [Pipeline] sh
00:00:51.541 + set -ex
00:00:51.541 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]]
00:00:51.541 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:00:51.541 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:51.541 ++ SPDK_TEST_FUZZER_SHORT=1
00:00:51.541 ++ SPDK_TEST_FUZZER=1
00:00:51.541 ++ SPDK_RUN_UBSAN=1
00:00:51.541 ++ RUN_NIGHTLY=1
00:00:51.541 + case $SPDK_TEST_NVMF_NICS in
00:00:51.541 + DRIVERS=
00:00:51.541 + [[ -n '' ]]
00:00:51.541 + exit 0
00:00:51.550 [Pipeline] }
00:00:51.565 [Pipeline] // withEnv
00:00:51.570 [Pipeline] }
00:00:51.585 [Pipeline] // stage
00:00:51.594 [Pipeline] catchError
00:00:51.595 [Pipeline] {
00:00:51.609 [Pipeline] timeout
00:00:51.609 Timeout set to expire in 30 min
00:00:51.611 [Pipeline] {
00:00:51.625 [Pipeline] stage
00:00:51.626 [Pipeline] { (Tests)
00:00:51.640 [Pipeline] sh
00:00:51.924 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:51.924 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:51.924 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest
00:00:51.924 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]]
00:00:51.924 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:51.924 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output
00:00:51.924 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]]
00:00:51.924 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:00:51.924 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output
00:00:51.924 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:00:51.924 + [[ short-fuzz-phy-autotest == pkgdep-* ]]
00:00:51.924 + cd /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:51.924 + source /etc/os-release
00:00:51.924 ++ NAME='Fedora Linux'
00:00:51.925 ++ VERSION='39 (Cloud Edition)'
00:00:51.925 ++ ID=fedora
00:00:51.925 ++ VERSION_ID=39
00:00:51.925 ++ VERSION_CODENAME=
00:00:51.925 ++ PLATFORM_ID=platform:f39
00:00:51.925 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:00:51.925 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:51.925 ++ LOGO=fedora-logo-icon
00:00:51.925 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:00:51.925 ++ HOME_URL=https://fedoraproject.org/
00:00:51.925 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:00:51.925 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:51.925 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:51.925 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:51.925 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:00:51.925 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:51.925 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:00:51.925 ++ SUPPORT_END=2024-11-12
00:00:51.925 ++ VARIANT='Cloud Edition'
00:00:51.925 ++ VARIANT_ID=cloud
00:00:51.925 + uname -a
00:00:51.925 Linux spdk-wfp-20 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:00:51.925 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:00:55.215 Hugepages
00:00:55.215 node hugesize free / total
00:00:55.215 node0 1048576kB 0 / 0
00:00:55.215 node0 2048kB 0 / 0
00:00:55.215 node1 1048576kB 0 / 0
00:00:55.215 node1 2048kB 0 / 0
00:00:55.215
00:00:55.215 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:55.215 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:55.215 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:55.215 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:55.215 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:55.215 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:55.215 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:55.215 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:55.215 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:55.215 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:55.215 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:55.215 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:55.215 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:55.215 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:55.215 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:55.215 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:55.215 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:55.215 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:55.215 + rm -f /tmp/spdk-ld-path
00:00:55.215 + source autorun-spdk.conf
00:00:55.215 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:55.215 ++ SPDK_TEST_FUZZER_SHORT=1
00:00:55.215 ++ SPDK_TEST_FUZZER=1
00:00:55.215 ++ SPDK_RUN_UBSAN=1
00:00:55.215 ++ RUN_NIGHTLY=1
00:00:55.215 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:55.215 + [[ -n '' ]]
00:00:55.215 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:55.215 + for M in /var/spdk/build-*-manifest.txt
00:00:55.215 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:00:55.215 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:00:55.215 + for M in /var/spdk/build-*-manifest.txt
00:00:55.215 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:55.215 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:00:55.215 + for M in /var/spdk/build-*-manifest.txt
00:00:55.215 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:55.215 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:00:55.215 ++ uname
00:00:55.215 + [[ Linux == \L\i\n\u\x ]]
00:00:55.215 + sudo dmesg -T
00:00:55.215 + sudo dmesg --clear
00:00:55.215 + dmesg_pid=1189106
00:00:55.215 + [[ Fedora Linux == FreeBSD ]]
00:00:55.215 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:55.215 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:55.215 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:55.215 + [[ -x /usr/src/fio-static/fio ]]
00:00:55.215 + export FIO_BIN=/usr/src/fio-static/fio
00:00:55.215 + FIO_BIN=/usr/src/fio-static/fio
00:00:55.215 + sudo dmesg -Tw
00:00:55.215 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:55.215 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:55.215 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:55.215 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:55.215 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:55.215 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:55.215 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:55.215 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:55.215 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:00:55.215 Test configuration:
00:00:55.215 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:55.215 SPDK_TEST_FUZZER_SHORT=1
00:00:55.215 SPDK_TEST_FUZZER=1
00:00:55.215 SPDK_RUN_UBSAN=1
00:00:55.215 RUN_NIGHTLY=1 10:38:43 -- common/autotest_common.sh@1689 -- $ [[ n == y ]]
00:00:55.215 10:38:43 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:00:55.215 10:38:43 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:55.215 10:38:43 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:55.215 10:38:43 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:55.215 10:38:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:55.215 10:38:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:55.215 10:38:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:55.215 10:38:43 -- paths/export.sh@5 -- $ export PATH
00:00:55.215 10:38:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:55.215 10:38:43 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:00:55.215 10:38:43 -- common/autobuild_common.sh@440 -- $ date +%s
00:00:55.215 10:38:43 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734255523.XXXXXX
00:00:55.215 10:38:43 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734255523.aPMeOb
00:00:55.215 10:38:43 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:00:55.215 10:38:43 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:00:55.215 10:38:43 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
00:00:55.215 10:38:43 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:55.215 10:38:43 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:55.215 10:38:43 -- common/autobuild_common.sh@456 -- $ get_config_params
00:00:55.215 10:38:43 -- common/autotest_common.sh@397 -- $ xtrace_disable
00:00:55.215 10:38:43 -- common/autotest_common.sh@10 -- $ set +x
00:00:55.216 10:38:44 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:00:55.216 10:38:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:55.216 10:38:44 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:55.216 10:38:44 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:55.216 10:38:44 -- spdk/autobuild.sh@16 -- $ date -u
00:00:55.216 Sun Dec 15 09:38:44 AM UTC 2024
00:00:55.216 10:38:44 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:55.216 LTS-67-gc13c99a5e
00:00:55.216 10:38:44 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:55.216 10:38:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:55.216 10:38:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:55.216 10:38:44 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:00:55.216 10:38:44 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:00:55.216 10:38:44 -- common/autotest_common.sh@10 -- $ set +x
00:00:55.216 ************************************
00:00:55.216 START TEST ubsan
00:00:55.216 ************************************
00:00:55.216 10:38:44 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan'
00:00:55.216 using ubsan
00:00:55.216
00:00:55.216 real 0m0.000s
00:00:55.216 user 0m0.000s
00:00:55.216 sys 0m0.000s
00:00:55.216 10:38:44 -- common/autotest_common.sh@1115 -- $ xtrace_disable
00:00:55.216 10:38:44 -- common/autotest_common.sh@10 -- $ set +x
00:00:55.216 ************************************
00:00:55.216 END TEST ubsan
00:00:55.216 ************************************
00:00:55.216 10:38:44 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:55.216 10:38:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:55.216 10:38:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:55.216 10:38:44 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]]
00:00:55.216 10:38:44 -- spdk/autobuild.sh@52 -- $ llvm_precompile
00:00:55.216 10:38:44 -- common/autobuild_common.sh@428 -- $ run_test autobuild_llvm_precompile _llvm_precompile
00:00:55.216 10:38:44 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']'
00:00:55.216 10:38:44 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:00:55.216 10:38:44 -- common/autotest_common.sh@10 -- $ set +x
00:00:55.216 ************************************
00:00:55.216 START TEST autobuild_llvm_precompile
00:00:55.216 ************************************
00:00:55.216 10:38:44 -- common/autotest_common.sh@1114 -- $ _llvm_precompile
00:00:55.216 10:38:44 -- common/autobuild_common.sh@32 -- $ clang --version
00:00:55.216 10:38:44 -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39)
00:00:55.216 Target: x86_64-redhat-linux-gnu
00:00:55.216 Thread model: posix
00:00:55.216 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]]
00:00:55.216 10:38:44 -- common/autobuild_common.sh@33 -- $ clang_num=17
00:00:55.216 10:38:44 -- common/autobuild_common.sh@35 -- $ export CC=clang-17
00:00:55.216 10:38:44 -- common/autobuild_common.sh@35 -- $ CC=clang-17
00:00:55.216 10:38:44 -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17
00:00:55.216 10:38:44 -- common/autobuild_common.sh@36 -- $ CXX=clang++-17
00:00:55.216 10:38:44 -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
00:00:55.216 10:38:44 -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a
00:00:55.216 10:38:44 -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]]
00:00:55.216 10:38:44 -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a'
00:00:55.216 10:38:44 -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a
00:00:55.475 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:00:55.475 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:00:55.734 Using 'verbs' RDMA provider
00:01:11.555 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/isa-l/spdk-isal.log)...done.
00:01:23.749 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:01:23.749 Creating mk/config.mk...done.
00:01:23.749 Creating mk/cc.flags.mk...done.
00:01:23.749 Type 'make' to build.
00:01:23.749
00:01:23.749 real 0m27.742s
00:01:23.749 user 0m12.120s
00:01:23.749 sys 0m14.997s
00:01:23.749 10:39:11 -- common/autotest_common.sh@1115 -- $ xtrace_disable
00:01:23.749 10:39:11 -- common/autotest_common.sh@10 -- $ set +x
00:01:23.749 ************************************
00:01:23.749 END TEST autobuild_llvm_precompile
00:01:23.749 ************************************
00:01:23.749 10:39:11 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:23.749 10:39:11 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:23.749 10:39:11 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:23.749 10:39:11 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]]
00:01:23.749 10:39:11 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a
00:01:23.749 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:01:23.749 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:01:23.749 Using 'verbs' RDMA provider
00:01:36.515 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/isa-l/spdk-isal.log)...done.
00:01:48.775 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:01:48.775 Creating mk/config.mk...done.
00:01:48.775 Creating mk/cc.flags.mk...done.
00:01:48.775 Type 'make' to build.
00:01:48.775 10:39:36 -- spdk/autobuild.sh@69 -- $ run_test make make -j112
00:01:48.775 10:39:36 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:48.775 10:39:36 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:48.775 10:39:36 -- common/autotest_common.sh@10 -- $ set +x
00:01:48.775 ************************************
00:01:48.775 START TEST make
00:01:48.775 ************************************
00:01:48.775 10:39:36 -- common/autotest_common.sh@1114 -- $ make -j112
00:01:48.775 make[1]: Nothing to be done for 'all'.
00:01:49.341 The Meson build system
00:01:49.341 Version: 1.5.0
00:01:49.341 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user
00:01:49.341 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:49.341 Build type: native build
00:01:49.341 Project name: libvfio-user
00:01:49.341 Project version: 0.0.1
00:01:49.341 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)")
00:01:49.341 C linker for the host machine: clang-17 ld.bfd 2.40-14
00:01:49.341 Host machine cpu family: x86_64
00:01:49.341 Host machine cpu: x86_64
00:01:49.341 Run-time dependency threads found: YES
00:01:49.341 Library dl found: YES
00:01:49.341 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:49.341 Run-time dependency json-c found: YES 0.17
00:01:49.341 Run-time dependency cmocka found: YES 1.1.7
00:01:49.341 Program pytest-3 found: NO
00:01:49.341 Program flake8 found: NO
00:01:49.341 Program misspell-fixer found: NO
00:01:49.341 Program restructuredtext-lint found: NO
00:01:49.341 Program valgrind found: YES (/usr/bin/valgrind)
00:01:49.341 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:49.341 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:49.341 Compiler for C supports arguments -Wwrite-strings: YES
00:01:49.341 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:49.341 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:49.341 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:49.341 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:49.341 Build targets in project: 8
00:01:49.341 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:49.341 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:49.341
00:01:49.341 libvfio-user 0.0.1
00:01:49.341
00:01:49.341 User defined options
00:01:49.341 buildtype : debug
00:01:49.341 default_library: static
00:01:49.341 libdir : /usr/local/lib
00:01:49.341
00:01:49.341 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:49.906 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:49.906 [1/36] Compiling C object lib/libvfio-user.a.p/tran.c.o
00:01:49.906 [2/36] Compiling C object lib/libvfio-user.a.p/pci.c.o
00:01:49.906 [3/36] Compiling C object samples/lspci.p/lspci.c.o
00:01:49.906 [4/36] Compiling C object lib/libvfio-user.a.p/irq.c.o
00:01:49.906 [5/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:49.906 [6/36] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:49.906 [7/36] Compiling C object samples/null.p/null.c.o
00:01:49.906 [8/36] Compiling C object lib/libvfio-user.a.p/migration.c.o
00:01:49.906 [9/36] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:49.906 [10/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o
00:01:49.906 [11/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:49.906 [12/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:49.906 [13/36] Compiling C object samples/server.p/server.c.o
00:01:49.906 [14/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:49.906 [15/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:49.906 [16/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:49.906 [17/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o
00:01:49.906 [18/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:49.906 [19/36] Compiling C object lib/libvfio-user.a.p/dma.c.o
00:01:49.906 [20/36] Compiling C object test/unit_tests.p/mocks.c.o
00:01:49.906 [21/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:49.906 [22/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:49.906 [23/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:49.906 [24/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:49.906 [25/36] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:49.906 [26/36] Compiling C object samples/client.p/client.c.o
00:01:49.906 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o
00:01:49.906 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:49.906 [29/36] Linking static target lib/libvfio-user.a
00:01:49.906 [30/36] Linking target samples/client
00:01:49.906 [31/36] Linking target samples/shadow_ioeventfd_server
00:01:49.906 [32/36] Linking target samples/null
00:01:49.906 [33/36] Linking target test/unit_tests
00:01:49.906 [34/36] Linking target samples/lspci
00:01:49.906 [35/36] Linking target samples/gpio-pci-idio-16
00:01:49.906 [36/36] Linking target samples/server
00:01:49.906 INFO: autodetecting backend as ninja
00:01:49.906 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:49.906 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:50.471 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:50.471 ninja: no work to do.
00:01:55.824 The Meson build system
00:01:55.824 Version: 1.5.0
00:01:55.824 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk
00:01:55.824 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp
00:01:55.824 Build type: native build
00:01:55.824 Program cat found: YES (/usr/bin/cat)
00:01:55.824 Project name: DPDK
00:01:55.824 Project version: 23.11.0
00:01:55.824 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)")
00:01:55.824 C linker for the host machine: clang-17 ld.bfd 2.40-14
00:01:55.824 Host machine cpu family: x86_64
00:01:55.824 Host machine cpu: x86_64
00:01:55.824 Message: ## Building in Developer Mode ##
00:01:55.824 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:55.824 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:55.824 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:55.824 Program python3 found: YES (/usr/bin/python3)
00:01:55.824 Program cat found: YES (/usr/bin/cat)
00:01:55.824 Compiler for C supports arguments -march=native: YES
00:01:55.824 Checking for size of "void *" : 8
00:01:55.824 Checking for size of "void *" : 8 (cached)
00:01:55.824 Library m found: YES
00:01:55.824 Library numa found: YES
00:01:55.824 Has header "numaif.h" : YES
00:01:55.824 Library fdt found: NO
00:01:55.824 Library execinfo found: NO
00:01:55.824 Has header "execinfo.h" : YES
00:01:55.825 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:55.825 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:55.825 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:55.825 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:55.825 Run-time dependency openssl found: YES 3.1.1
00:01:55.825 Run-time dependency libpcap found: YES 1.10.4
00:01:55.825 Has header "pcap.h" with dependency libpcap: YES
00:01:55.825 Compiler for C supports arguments -Wcast-qual: YES
00:01:55.825 Compiler for C supports arguments -Wdeprecated: YES
00:01:55.825 Compiler for C supports arguments -Wformat: YES
00:01:55.825 Compiler for C supports arguments -Wformat-nonliteral: YES
00:01:55.825 Compiler for C supports arguments -Wformat-security: YES
00:01:55.825 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:55.825 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:55.825 Compiler for C supports arguments -Wnested-externs: YES
00:01:55.825 Compiler for C supports arguments -Wold-style-definition: YES
00:01:55.825 Compiler for C supports arguments -Wpointer-arith: YES
00:01:55.825 Compiler for C supports arguments -Wsign-compare: YES
00:01:55.825 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:55.825 Compiler for C supports arguments -Wundef: YES
00:01:55.825 Compiler for C supports arguments -Wwrite-strings: YES
00:01:55.825 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:55.825 Compiler for C supports arguments -Wno-packed-not-aligned: NO
00:01:55.825 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:55.825 Program objdump found: YES (/usr/bin/objdump)
00:01:55.825 Compiler for C supports arguments -mavx512f: YES
00:01:55.825 Checking if "AVX512 checking" compiles: YES
00:01:55.825 Fetching value of define "__SSE4_2__" : 1
00:01:55.825 Fetching value of define "__AES__" : 1
00:01:55.825 Fetching value of define "__AVX__" : 1
00:01:55.825 Fetching value of define "__AVX2__" : 1
00:01:55.825 Fetching value of define "__AVX512BW__" : 1
00:01:55.825 Fetching value of define "__AVX512CD__" : 1
00:01:55.825 Fetching value of define "__AVX512DQ__" : 1
00:01:55.825 Fetching value of define "__AVX512F__" : 1
00:01:55.825 Fetching value of define "__AVX512VL__" : 1
00:01:55.825 Fetching value of define "__PCLMUL__" : 1
00:01:55.825 Fetching value of define "__RDRND__" : 1
00:01:55.825 Fetching value of define "__RDSEED__" : 1
00:01:55.825 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:55.825 Fetching value of define "__znver1__" : (undefined)
00:01:55.825 Fetching value of define "__znver2__" : (undefined)
00:01:55.825 Fetching value of define "__znver3__" : (undefined)
00:01:55.825 Fetching value of define "__znver4__" : (undefined)
00:01:55.825 Compiler for C supports arguments -Wno-format-truncation: NO
00:01:55.825 Message: lib/log: Defining dependency "log"
00:01:55.825 Message: lib/kvargs: Defining dependency "kvargs"
00:01:55.825 Message: lib/telemetry: Defining dependency "telemetry"
00:01:55.825 Checking for function "getentropy" : NO
00:01:55.825 Message: lib/eal: Defining dependency "eal"
00:01:55.825 Message: lib/ring: Defining dependency "ring"
00:01:55.825 Message: lib/rcu: Defining dependency "rcu"
00:01:55.825 Message: lib/mempool: Defining dependency "mempool"
00:01:55.825 Message: lib/mbuf: Defining dependency "mbuf"
00:01:55.825 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:55.825 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:55.825 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:55.825 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:55.825 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:55.825 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:55.825 Compiler for C supports arguments -mpclmul: YES
00:01:55.825 Compiler for C supports arguments -maes: YES
00:01:55.825 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:55.825 Compiler for C supports arguments -mavx512bw: YES
00:01:55.825 Compiler for C supports arguments -mavx512dq: YES
00:01:55.825 Compiler for C supports arguments -mavx512vl: YES
00:01:55.825 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:55.825 Compiler for C supports arguments -mavx2: YES
00:01:55.825 Compiler for C supports arguments -mavx: YES
00:01:55.825 Message: lib/net: Defining dependency "net"
00:01:55.825 Message: lib/meter: Defining dependency "meter"
00:01:55.825 Message: lib/ethdev: Defining dependency "ethdev"
00:01:55.825 Message: lib/pci: Defining dependency "pci"
00:01:55.825 Message: lib/cmdline: Defining dependency "cmdline"
00:01:55.825 Message: lib/hash: Defining dependency "hash"
00:01:55.825 Message: lib/timer: Defining dependency "timer"
00:01:55.825 Message: lib/compressdev: Defining dependency "compressdev"
00:01:55.825 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:55.825 Message: lib/dmadev: Defining dependency "dmadev"
00:01:55.825 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:55.825 Message: lib/power: Defining dependency "power"
00:01:55.825 Message: lib/reorder: Defining dependency "reorder"
00:01:55.825 Message: lib/security: Defining dependency "security"
00:01:55.825 Has header "linux/userfaultfd.h" : YES
00:01:55.825 Has header "linux/vduse.h" : YES
00:01:55.825 Message: lib/vhost: Defining dependency "vhost"
00:01:55.825 Compiler for C supports arguments -Wno-format-truncation: NO (cached)
00:01:55.825 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:55.825 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:55.825 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:55.825 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:55.825 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:55.825 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:55.825 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:55.825 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:55.825 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:55.825 Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:55.825 Configuring doxy-api-html.conf using configuration
00:01:55.825 Configuring doxy-api-man.conf using configuration
00:01:55.825 Program mandb found: YES (/usr/bin/mandb)
00:01:55.825 Program sphinx-build found: NO
00:01:55.825 Configuring rte_build_config.h using configuration
00:01:55.825 Message:
00:01:55.825 =================
00:01:55.825 Applications Enabled
00:01:55.825 =================
00:01:55.825
00:01:55.825 apps:
00:01:55.825
00:01:55.825
00:01:55.825 Message:
00:01:55.825 =================
00:01:55.825 Libraries Enabled
00:01:55.825 =================
00:01:55.825
00:01:55.825 libs:
00:01:55.825 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:55.825 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:55.825 cryptodev, dmadev, power, reorder, security, vhost,
00:01:55.825
00:01:55.825 Message:
00:01:55.825 ===============
00:01:55.825 Drivers Enabled
00:01:55.825 ===============
00:01:55.825
00:01:55.825 common:
00:01:55.825
00:01:55.825 bus:
00:01:55.825 pci, vdev,
00:01:55.825 mempool:
00:01:55.825 ring,
00:01:55.825 dma:
00:01:55.825
00:01:55.825 net:
00:01:55.825
00:01:55.825 crypto:
00:01:55.825
00:01:55.825 compress:
00:01:55.825
00:01:55.825 vdpa:
00:01:55.825
00:01:55.825
00:01:55.825 Message:
00:01:55.825 =================
00:01:55.825 Content Skipped
00:01:55.825 =================
00:01:55.825
00:01:55.825 apps:
00:01:55.825 dumpcap: explicitly disabled via build config
00:01:55.825 graph: explicitly disabled via build config
00:01:55.825 pdump: explicitly disabled via build config
00:01:55.825 proc-info: explicitly disabled via build config
00:01:55.825 test-acl: explicitly disabled via build config
00:01:55.825 test-bbdev: explicitly disabled via build config
00:01:55.825 test-cmdline: explicitly disabled via build config
00:01:55.825 test-compress-perf: explicitly disabled via build config
00:01:55.825 test-crypto-perf: explicitly disabled via build config
00:01:55.825 test-dma-perf: explicitly disabled via build config
00:01:55.825 test-eventdev: explicitly disabled via build config
00:01:55.825 test-fib: explicitly disabled via build config
00:01:55.825 test-flow-perf: explicitly disabled via build config
00:01:55.825 test-gpudev: explicitly disabled via build config
00:01:55.825 test-mldev: explicitly disabled via build config
00:01:55.825 test-pipeline: explicitly disabled via build config
00:01:55.825 test-pmd: explicitly disabled via build config
00:01:55.825 test-regex: explicitly disabled via build config
00:01:55.825 test-sad: explicitly disabled via build config
00:01:55.825 test-security-perf: explicitly disabled via build config
00:01:55.825
00:01:55.825 libs:
00:01:55.825 metrics: explicitly disabled via build config
00:01:55.825 acl: explicitly disabled via build config
00:01:55.825 bbdev: explicitly disabled via build config
00:01:55.825 bitratestats: explicitly disabled via build config
00:01:55.825 bpf: explicitly disabled via build config
00:01:55.825 cfgfile: explicitly disabled via build config
00:01:55.825 distributor: explicitly disabled via build config
00:01:55.825 efd: explicitly disabled via build config
00:01:55.825 eventdev: explicitly disabled via build config
00:01:55.825 dispatcher: explicitly disabled via build config
00:01:55.825 gpudev: explicitly disabled via build config
00:01:55.825 gro: explicitly disabled via build config
00:01:55.825 gso: explicitly disabled via build config
00:01:55.825 ip_frag: explicitly disabled via build config
00:01:55.825 jobstats: explicitly disabled via build config
00:01:55.825 latencystats: explicitly disabled via build config
00:01:55.825 lpm: explicitly disabled via build config
00:01:55.825 member: explicitly disabled via build config
00:01:55.825 pcapng: explicitly disabled via build config
00:01:55.825 rawdev: explicitly disabled via build config
00:01:55.825 regexdev: explicitly disabled via build config
00:01:55.825 mldev: explicitly disabled via build config
00:01:55.825 rib: explicitly disabled via build config
00:01:55.825 sched: explicitly disabled via build config
00:01:55.825 stack: explicitly disabled via build config
00:01:55.825 ipsec: explicitly disabled via build config
00:01:55.825 pdcp: explicitly disabled via build config
00:01:55.825 fib: explicitly disabled via build config
00:01:55.826 port: explicitly disabled via build config
00:01:55.826 pdump: explicitly disabled via build config
00:01:55.826 table: explicitly disabled via build config
00:01:55.826 pipeline: explicitly disabled via build config
00:01:55.826 graph: explicitly disabled via build config
00:01:55.826 node: explicitly disabled via build config
00:01:55.826
00:01:55.826 drivers:
00:01:55.826 common/cpt: not in enabled drivers build config
00:01:55.826 common/dpaax: not in enabled drivers build config
00:01:55.826 common/iavf: not in enabled drivers build config
00:01:55.826 common/idpf: not in enabled drivers build config
00:01:55.826 common/mvep: not in enabled drivers build config
00:01:55.826 common/octeontx: not in enabled drivers build config
00:01:55.826 bus/auxiliary: not in enabled drivers build config
00:01:55.826 bus/cdx: not in enabled drivers build config
00:01:55.826 bus/dpaa: not in enabled drivers build config
00:01:55.826 bus/fslmc: not in enabled drivers build config
00:01:55.826 bus/ifpga: not in enabled drivers build config
00:01:55.826 bus/platform: not in enabled drivers build config
00:01:55.826 bus/vmbus: not in enabled drivers build config
00:01:55.826 common/cnxk: not in enabled drivers build config
00:01:55.826 common/mlx5: not in enabled drivers build config
00:01:55.826 common/nfp: not in enabled drivers build config
00:01:55.826 common/qat: not in enabled drivers build config
00:01:55.826 common/sfc_efx: not in enabled drivers build config
00:01:55.826 mempool/bucket: not in enabled drivers build config
00:01:55.826 mempool/cnxk: not in enabled drivers build config
00:01:55.826 mempool/dpaa: not in enabled drivers build config
00:01:55.826 mempool/dpaa2: not in enabled drivers build config
00:01:55.826 mempool/octeontx: not in enabled drivers build config
00:01:55.826 mempool/stack: not in enabled drivers build config
00:01:55.826 dma/cnxk: not in enabled drivers build config
00:01:55.826 dma/dpaa: not in enabled drivers build config
00:01:55.826 dma/dpaa2: not in enabled drivers build config
00:01:55.826 dma/hisilicon: not in enabled drivers build config
00:01:55.826 dma/idxd: not in enabled drivers build config
00:01:55.826 dma/ioat: not in enabled drivers build config
00:01:55.826 dma/skeleton: not in enabled drivers build config
00:01:55.826 net/af_packet: not in enabled drivers build config
00:01:55.826 net/af_xdp: not in enabled drivers build config
00:01:55.826 net/ark: not in enabled drivers build config
00:01:55.826 net/atlantic: not in enabled drivers build config
00:01:55.826 net/avp: not in enabled drivers build config
00:01:55.826 net/axgbe: not in enabled drivers build config
00:01:55.826 net/bnx2x: not in enabled drivers build config
00:01:55.826 net/bnxt: not in enabled drivers build config
00:01:55.826 net/bonding: not in enabled drivers build config
00:01:55.826 net/cnxk: not in enabled drivers build config
00:01:55.826 net/cpfl: not in enabled drivers build config
00:01:55.826 net/cxgbe: not in enabled drivers build config
00:01:55.826 net/dpaa: not in enabled drivers build config
00:01:55.826 net/dpaa2: not in enabled drivers build config
00:01:55.826 net/e1000: not in enabled drivers build config
00:01:55.826 net/ena: not in enabled drivers build config
00:01:55.826 net/enetc: not in enabled drivers build config
00:01:55.826 net/enetfec: not in enabled drivers build config
00:01:55.826 net/enic: not in enabled drivers build config
00:01:55.826 net/failsafe: not in enabled drivers build config
00:01:55.826 net/fm10k: not in enabled drivers build config
00:01:55.826 net/gve: not in enabled drivers build config
00:01:55.826 net/hinic: not in enabled drivers build config
00:01:55.826 net/hns3: not in enabled drivers build config
00:01:55.826 net/i40e: not in enabled drivers build config
00:01:55.826 net/iavf: not in enabled drivers build config
00:01:55.826 net/ice: not in enabled drivers build config
00:01:55.826 net/idpf: not in enabled drivers build config
00:01:55.826 net/igc: not in enabled drivers build config
00:01:55.826 net/ionic: not in enabled drivers build config
00:01:55.826 net/ipn3ke: not in enabled drivers build config
00:01:55.826 net/ixgbe: not in enabled drivers build config
00:01:55.826 net/mana: not in enabled drivers build config
00:01:55.826 net/memif: not in enabled drivers build config
00:01:55.826 net/mlx4: not in enabled drivers build config
00:01:55.826 net/mlx5: not in enabled drivers build config
00:01:55.826 net/mvneta: not in enabled drivers build config
00:01:55.826 net/mvpp2: not in enabled drivers build config
00:01:55.826 net/netvsc: not in enabled drivers build config
00:01:55.826 net/nfb: not in enabled drivers build config
00:01:55.826 net/nfp: not in enabled drivers build config
00:01:55.826 net/ngbe: not in enabled drivers build config
00:01:55.826 net/null: not in enabled drivers build config
00:01:55.826 net/octeontx: not in enabled drivers build config
00:01:55.826 net/octeon_ep: not in enabled drivers build config
00:01:55.826 net/pcap: not in enabled drivers build config
00:01:55.826 net/pfe: not in enabled drivers build config
00:01:55.826 net/qede: not in enabled drivers build config
00:01:55.826 net/ring: not in enabled drivers build config
00:01:55.826 net/sfc: not in enabled drivers build config
00:01:55.826 net/softnic: not in enabled drivers build config
00:01:55.826 net/tap: not in enabled drivers build config
00:01:55.826 net/thunderx: not in enabled drivers build config
00:01:55.826 net/txgbe: not in enabled drivers build config
00:01:55.826 net/vdev_netvsc: not in enabled drivers build config
00:01:55.826 net/vhost: not in enabled drivers build config
00:01:55.826 net/virtio: not in enabled drivers build config
00:01:55.826 net/vmxnet3: not in enabled drivers build config
00:01:55.826 raw/*: missing internal dependency, "rawdev"
00:01:55.826 crypto/armv8: not in enabled drivers build config
00:01:55.826 crypto/bcmfs: not in enabled drivers build config
00:01:55.826 crypto/caam_jr: not in enabled drivers build config
00:01:55.826 crypto/ccp: not in enabled drivers build config
00:01:55.826 crypto/cnxk: not in enabled drivers build config
00:01:55.826 crypto/dpaa_sec: not in enabled drivers build config
00:01:55.826 crypto/dpaa2_sec: not in enabled drivers build config
00:01:55.826 crypto/ipsec_mb: not in enabled drivers build config
00:01:55.826 crypto/mlx5: not in enabled drivers build config
00:01:55.826 crypto/mvsam: not in enabled drivers build config
00:01:55.826 crypto/nitrox: not in enabled drivers build config
00:01:55.826 crypto/null: not in enabled drivers build config
00:01:55.826 crypto/octeontx: not in enabled drivers build config
00:01:55.826 crypto/openssl: not in enabled drivers build config
00:01:55.826 crypto/scheduler: not in enabled drivers build config
00:01:55.826 crypto/uadk: not in enabled drivers build config
00:01:55.826 crypto/virtio: not in enabled drivers build config
00:01:55.826 compress/isal: not in enabled drivers build config
00:01:55.826 compress/mlx5: not in enabled drivers build config
00:01:55.826 compress/octeontx: not in enabled drivers build config
00:01:55.826 compress/zlib: not in enabled drivers build config
00:01:55.826 regex/*: missing internal dependency, "regexdev"
00:01:55.826 ml/*: missing internal dependency, "mldev"
00:01:55.826 vdpa/ifc: not in enabled drivers build config
00:01:55.826 vdpa/mlx5: not in enabled drivers build config
00:01:55.826 vdpa/nfp: not in enabled drivers build config
00:01:55.826 vdpa/sfc: not in enabled drivers build config
00:01:55.826 event/*: missing internal dependency, "eventdev"
00:01:55.826 baseband/*: missing internal dependency, "bbdev"
00:01:55.826 gpu/*: missing internal dependency, "gpudev"
00:01:55.826
00:01:55.826
00:01:55.826 Build targets in project: 85
00:01:55.826
00:01:55.826 DPDK 23.11.0
00:01:55.826
00:01:55.826 User defined options
00:01:55.826 buildtype : debug
00:01:55.826 default_library : static
00:01:55.826 libdir : lib
00:01:55.826 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:01:55.826 c_args : -fPIC -Werror
00:01:55.826 c_link_args :
00:01:55.826 cpu_instruction_set: native
00:01:55.826 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:01:55.826 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,pcapng,bbdev
00:01:55.826 enable_docs : false
00:01:55.826 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:55.826 enable_kmods : false
00:01:55.826 tests : false
00:01:55.826
00:01:55.826 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:56.093 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp'
00:01:56.093 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:56.093 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:56.093 [3/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:56.093 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:56.093 [5/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:56.093 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:56.093 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:56.093 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:56.093 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:56.093 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:56.093 [11/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:56.093 [12/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:56.093 [13/265] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:56.093 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:56.093 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:56.093 [16/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:56.093 [17/265] Linking static target lib/librte_kvargs.a
00:01:56.093 [18/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:56.093 [19/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:56.093 [20/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:56.093 [21/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:56.093 [22/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:56.093 [23/265] Linking static target lib/librte_log.a
00:01:56.093 [24/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:56.093 [25/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:56.093 [26/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:56.093 [27/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:56.093 [28/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:56.093 [29/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:56.093 [30/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:56.093 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:56.093 [32/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:56.093 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:56.093 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:56.093 [35/265] Linking static target lib/librte_pci.a
00:01:56.093 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:56.093 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:56.093 [38/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:56.093 [39/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:56.093 [40/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:56.350 [41/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:56.350 [42/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:56.350 [43/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:56.608 [44/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:56.608 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:56.608 [46/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:56.608 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:56.608 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:56.608 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:56.608 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:56.608 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:56.608 [52/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:56.608 [53/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:56.608 [54/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:56.608 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:56.608 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:56.608 [57/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:56.608 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:56.608 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:56.608 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:56.608 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:56.608 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:56.608 [63/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:56.608 [64/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:56.608 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:56.608 [66/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:56.608 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:56.608 [68/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:56.608 [69/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:56.608 [70/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:56.608 [71/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:56.608 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:56.608 [73/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:56.608 [74/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:56.608 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:56.608 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:56.608 [77/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:56.608 [78/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:56.608 [79/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:56.608 [80/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:56.608 [81/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:56.608 [82/265] Linking static target lib/librte_telemetry.a
00:01:56.608 [83/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:56.608 [84/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:56.608 [85/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:56.608 [86/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:56.608 [87/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:56.608 [88/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:56.608 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:56.608 [90/265] Linking static target lib/librte_meter.a
00:01:56.608 [91/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:56.608 [92/265] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:56.608 [93/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:56.608 [94/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:56.608 [95/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:56.608 [96/265] Linking static target lib/librte_ring.a
00:01:56.608 [97/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:56.608 [98/265] Linking static target lib/librte_timer.a
00:01:56.608 [99/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:56.608 [100/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:56.608 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:56.608 [102/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:56.608 [103/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:56.608 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:56.608 [105/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:56.608 [106/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:56.608 [107/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:56.608 [108/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:56.608 [109/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:56.608 [110/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:56.608 [111/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:56.608 [112/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:56.608 [113/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:56.608 [114/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:56.608 [115/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:56.608 [116/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:56.608 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:56.608 [118/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:56.608 [119/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:56.608 [120/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:56.608 [121/265] Linking static target lib/librte_cmdline.a
00:01:56.608 [122/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:56.608 [123/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:56.608 [124/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:56.608 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:56.608 [126/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:56.608 [127/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:56.608 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:56.608 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:56.608 [130/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:56.608 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:56.608 [132/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:56.608 [133/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:56.608 [134/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:56.608 [135/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:56.866 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:56.866 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:56.866 [138/265] Linking target lib/librte_log.so.24.0
00:01:56.866 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:56.866 [140/265] Linking static target lib/librte_mempool.a
00:01:56.866 [141/265] Linking static target lib/librte_eal.a
00:01:56.866 [142/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:56.866 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:56.866 [144/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:56.866 [145/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:56.866 [146/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:56.866 [147/265] Linking static target lib/librte_net.a
00:01:56.866 [148/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:56.866 [149/265] Linking static target lib/librte_dmadev.a
00:01:56.866 [150/265] Linking static target lib/librte_rcu.a
00:01:56.866 [151/265] Linking static target lib/librte_compressdev.a
00:01:56.866 [152/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:56.866 [153/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:56.866 [154/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:56.866 [155/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:56.866 [156/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:56.866 [157/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:56.866 [158/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:56.866 [159/265] Linking static target lib/librte_mbuf.a
00:01:56.866 [160/265] Linking static target lib/librte_reorder.a
00:01:56.866 [161/265] Linking static target lib/librte_hash.a
00:01:56.866 [162/265] Compiling C
object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:56.866 [163/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:56.866 [164/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:56.866 [165/265] Linking static target lib/librte_power.a 00:01:56.866 [166/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:56.866 [167/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:56.866 [168/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:56.866 [169/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.866 [170/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:56.866 [171/265] Linking target lib/librte_kvargs.so.24.0 00:01:56.866 [172/265] Linking static target lib/librte_security.a 00:01:56.866 [173/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:56.866 [174/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:56.866 [175/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:56.866 [176/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:57.124 [177/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.124 [178/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:57.124 [179/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:57.124 [180/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:57.124 [181/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:57.124 [182/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:57.124 [183/265] Linking static target lib/librte_cryptodev.a 00:01:57.124 [184/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:57.124 [185/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:57.124 [186/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:57.124 [187/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:57.124 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:57.124 [189/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:57.124 [190/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:57.124 [191/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:57.124 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:57.124 [193/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.124 [194/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.124 [195/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.124 [196/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.124 [197/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:57.124 [198/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:57.124 [199/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:57.124 [200/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:57.383 
[201/265] Linking target lib/librte_telemetry.so.24.0 00:01:57.383 [202/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:57.383 [203/265] Linking static target drivers/librte_bus_vdev.a 00:01:57.383 [204/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:57.383 [205/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:57.383 [206/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:57.383 [207/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.383 [208/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:57.383 [209/265] Linking static target drivers/librte_mempool_ring.a 00:01:57.383 [210/265] Linking static target lib/librte_ethdev.a 00:01:57.383 [211/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.383 [212/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:57.383 [213/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:57.383 [214/265] Linking static target drivers/librte_bus_pci.a 00:01:57.383 [215/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:57.641 [216/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.641 [217/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.641 [218/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.641 [219/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.641 [220/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.641 [221/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.900 [222/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.900 [223/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:57.900 [224/265] Linking static target lib/librte_vhost.a 00:01:57.900 [225/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.159 [226/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.534 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.101 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.667 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.977 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.978 [231/265] Linking target lib/librte_eal.so.24.0 00:02:09.978 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:09.978 [233/265] Linking target lib/librte_timer.so.24.0 00:02:09.978 [234/265] Linking target lib/librte_dmadev.so.24.0 00:02:09.978 [235/265] Linking target lib/librte_pci.so.24.0 00:02:09.978 [236/265] Linking target lib/librte_meter.so.24.0 00:02:09.978 [237/265] Linking target lib/librte_ring.so.24.0 00:02:09.978 [238/265] Linking target 
drivers/librte_bus_vdev.so.24.0 00:02:09.978 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:09.978 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:09.978 [241/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:09.978 [242/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:09.978 [243/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:10.236 [244/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:10.236 [245/265] Linking target lib/librte_mempool.so.24.0 00:02:10.236 [246/265] Linking target lib/librte_rcu.so.24.0 00:02:10.236 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:10.236 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:10.236 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:10.236 [250/265] Linking target lib/librte_mbuf.so.24.0 00:02:10.495 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:10.495 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:02:10.495 [253/265] Linking target lib/librte_compressdev.so.24.0 00:02:10.495 [254/265] Linking target lib/librte_reorder.so.24.0 00:02:10.495 [255/265] Linking target lib/librte_net.so.24.0 00:02:10.754 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:10.754 [257/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:10.754 [258/265] Linking target lib/librte_security.so.24.0 00:02:10.754 [259/265] Linking target lib/librte_hash.so.24.0 00:02:10.754 [260/265] Linking target lib/librte_ethdev.so.24.0 00:02:10.754 [261/265] Linking target lib/librte_cmdline.so.24.0 00:02:10.754 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:11.012 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:11.012 [264/265] Linking target lib/librte_power.so.24.0 00:02:11.012 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:11.012 INFO: autodetecting backend as ninja 00:02:11.012 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:11.946 CC lib/ut/ut.o 00:02:11.946 CC lib/log/log_flags.o 00:02:11.946 CC lib/log/log.o 00:02:11.946 CC lib/ut_mock/mock.o 00:02:11.946 CC lib/log/log_deprecated.o 00:02:12.204 LIB libspdk_ut_mock.a 00:02:12.204 LIB libspdk_ut.a 00:02:12.204 LIB libspdk_log.a 00:02:12.462 CC lib/ioat/ioat.o 00:02:12.462 CC lib/dma/dma.o 00:02:12.462 CXX lib/trace_parser/trace.o 00:02:12.462 CC lib/util/base64.o 00:02:12.462 CC lib/util/bit_array.o 00:02:12.462 CC lib/util/cpuset.o 00:02:12.462 CC lib/util/crc32c.o 00:02:12.462 CC lib/util/crc16.o 00:02:12.462 CC lib/util/crc32.o 00:02:12.462 CC lib/util/crc32_ieee.o 00:02:12.462 CC lib/util/crc64.o 00:02:12.462 CC lib/util/dif.o 00:02:12.462 CC lib/util/fd.o 00:02:12.462 CC lib/util/iov.o 00:02:12.462 CC lib/util/file.o 00:02:12.462 CC lib/util/hexlify.o 00:02:12.462 CC lib/util/math.o 00:02:12.462 CC lib/util/pipe.o 00:02:12.462 CC lib/util/strerror_tls.o 00:02:12.462 CC lib/util/string.o 00:02:12.462 CC lib/util/uuid.o 00:02:12.462 CC lib/util/fd_group.o 00:02:12.462 CC lib/util/xor.o 00:02:12.462 CC lib/util/zipf.o 00:02:12.462 CC 
lib/vfio_user/host/vfio_user_pci.o 00:02:12.462 CC lib/vfio_user/host/vfio_user.o 00:02:12.462 LIB libspdk_dma.a 00:02:12.462 LIB libspdk_ioat.a 00:02:12.719 LIB libspdk_vfio_user.a 00:02:12.719 LIB libspdk_util.a 00:02:12.977 LIB libspdk_trace_parser.a 00:02:12.977 CC lib/env_dpdk/memory.o 00:02:12.977 CC lib/env_dpdk/env.o 00:02:12.977 CC lib/env_dpdk/init.o 00:02:12.977 CC lib/env_dpdk/pci.o 00:02:12.977 CC lib/env_dpdk/threads.o 00:02:12.977 CC lib/env_dpdk/pci_ioat.o 00:02:12.977 CC lib/env_dpdk/pci_virtio.o 00:02:12.977 CC lib/env_dpdk/pci_event.o 00:02:12.977 CC lib/env_dpdk/pci_vmd.o 00:02:12.977 CC lib/env_dpdk/pci_idxd.o 00:02:12.977 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:12.977 CC lib/env_dpdk/sigbus_handler.o 00:02:12.977 CC lib/env_dpdk/pci_dpdk.o 00:02:12.977 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:12.977 CC lib/conf/conf.o 00:02:12.977 CC lib/vmd/vmd.o 00:02:12.977 CC lib/vmd/led.o 00:02:12.977 CC lib/idxd/idxd.o 00:02:12.977 CC lib/idxd/idxd_user.o 00:02:12.977 CC lib/idxd/idxd_kernel.o 00:02:12.977 CC lib/rdma/common.o 00:02:12.977 CC lib/rdma/rdma_verbs.o 00:02:12.977 CC lib/json/json_parse.o 00:02:12.977 CC lib/json/json_util.o 00:02:12.977 CC lib/json/json_write.o 00:02:13.235 LIB libspdk_conf.a 00:02:13.235 LIB libspdk_rdma.a 00:02:13.235 LIB libspdk_json.a 00:02:13.235 LIB libspdk_idxd.a 00:02:13.493 LIB libspdk_vmd.a 00:02:13.493 CC lib/jsonrpc/jsonrpc_server.o 00:02:13.493 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:13.493 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:13.493 CC lib/jsonrpc/jsonrpc_client.o 00:02:13.752 LIB libspdk_jsonrpc.a 00:02:14.010 LIB libspdk_env_dpdk.a 00:02:14.010 CC lib/rpc/rpc.o 00:02:14.269 LIB libspdk_rpc.a 00:02:14.527 CC lib/sock/sock.o 00:02:14.527 CC lib/sock/sock_rpc.o 00:02:14.527 CC lib/notify/notify.o 00:02:14.527 CC lib/trace/trace_rpc.o 00:02:14.527 CC lib/trace/trace.o 00:02:14.527 CC lib/notify/notify_rpc.o 00:02:14.527 CC lib/trace/trace_flags.o 00:02:14.527 LIB libspdk_notify.a 00:02:14.527 LIB libspdk_trace.a 00:02:14.786 LIB libspdk_sock.a 00:02:15.044 CC lib/thread/thread.o 00:02:15.044 CC lib/thread/iobuf.o 00:02:15.044 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:15.044 CC lib/nvme/nvme_ctrlr.o 00:02:15.044 CC lib/nvme/nvme_fabric.o 00:02:15.044 CC lib/nvme/nvme_ns_cmd.o 00:02:15.044 CC lib/nvme/nvme_ns.o 00:02:15.044 CC lib/nvme/nvme_pcie_common.o 00:02:15.044 CC lib/nvme/nvme_pcie.o 00:02:15.044 CC lib/nvme/nvme_qpair.o 00:02:15.044 CC lib/nvme/nvme.o 00:02:15.044 CC lib/nvme/nvme_quirks.o 00:02:15.044 CC lib/nvme/nvme_transport.o 00:02:15.044 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:15.044 CC lib/nvme/nvme_discovery.o 00:02:15.044 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:15.044 CC lib/nvme/nvme_tcp.o 00:02:15.044 CC lib/nvme/nvme_opal.o 00:02:15.044 CC lib/nvme/nvme_io_msg.o 00:02:15.044 CC lib/nvme/nvme_poll_group.o 00:02:15.044 CC lib/nvme/nvme_zns.o 00:02:15.044 CC lib/nvme/nvme_cuse.o 00:02:15.044 CC lib/nvme/nvme_vfio_user.o 00:02:15.044 CC lib/nvme/nvme_rdma.o 00:02:15.612 LIB libspdk_thread.a 00:02:15.870 CC lib/virtio/virtio.o 00:02:15.870 CC lib/virtio/virtio_vhost_user.o 00:02:15.870 CC lib/virtio/virtio_vfio_user.o 00:02:15.870 CC lib/accel/accel.o 00:02:15.870 CC lib/accel/accel_rpc.o 00:02:15.870 CC lib/virtio/virtio_pci.o 00:02:15.870 CC lib/accel/accel_sw.o 00:02:16.128 CC lib/blob/zeroes.o 00:02:16.128 CC lib/blob/blobstore.o 00:02:16.128 CC lib/blob/request.o 00:02:16.128 CC lib/blob/blob_bs_dev.o 00:02:16.128 CC lib/init/json_config.o 00:02:16.128 CC lib/init/rpc.o 00:02:16.128 CC lib/init/subsystem.o 00:02:16.128 CC 
lib/init/subsystem_rpc.o 00:02:16.128 CC lib/vfu_tgt/tgt_endpoint.o 00:02:16.128 CC lib/vfu_tgt/tgt_rpc.o 00:02:16.128 LIB libspdk_virtio.a 00:02:16.128 LIB libspdk_nvme.a 00:02:16.128 LIB libspdk_init.a 00:02:16.128 LIB libspdk_vfu_tgt.a 00:02:16.386 CC lib/event/app.o 00:02:16.386 CC lib/event/reactor.o 00:02:16.386 CC lib/event/scheduler_static.o 00:02:16.386 CC lib/event/log_rpc.o 00:02:16.386 CC lib/event/app_rpc.o 00:02:16.644 LIB libspdk_accel.a 00:02:16.644 LIB libspdk_event.a 00:02:16.903 CC lib/bdev/bdev.o 00:02:16.903 CC lib/bdev/bdev_rpc.o 00:02:16.903 CC lib/bdev/bdev_zone.o 00:02:16.903 CC lib/bdev/part.o 00:02:16.903 CC lib/bdev/scsi_nvme.o 00:02:17.469 LIB libspdk_blob.a 00:02:17.727 CC lib/lvol/lvol.o 00:02:17.727 CC lib/blobfs/blobfs.o 00:02:17.727 CC lib/blobfs/tree.o 00:02:18.297 LIB libspdk_lvol.a 00:02:18.297 LIB libspdk_blobfs.a 00:02:18.554 LIB libspdk_bdev.a 00:02:18.812 CC lib/ublk/ublk.o 00:02:18.812 CC lib/ublk/ublk_rpc.o 00:02:18.812 CC lib/nbd/nbd.o 00:02:18.812 CC lib/nbd/nbd_rpc.o 00:02:18.812 CC lib/scsi/dev.o 00:02:18.812 CC lib/scsi/lun.o 00:02:18.812 CC lib/scsi/port.o 00:02:18.812 CC lib/scsi/scsi.o 00:02:18.812 CC lib/scsi/scsi_pr.o 00:02:18.812 CC lib/scsi/scsi_bdev.o 00:02:18.812 CC lib/scsi/scsi_rpc.o 00:02:18.812 CC lib/scsi/task.o 00:02:18.812 CC lib/ftl/ftl_layout.o 00:02:18.812 CC lib/ftl/ftl_core.o 00:02:18.812 CC lib/ftl/ftl_debug.o 00:02:18.812 CC lib/ftl/ftl_init.o 00:02:18.812 CC lib/nvmf/ctrlr.o 00:02:18.812 CC lib/nvmf/ctrlr_discovery.o 00:02:18.812 CC lib/ftl/ftl_io.o 00:02:18.812 CC lib/nvmf/ctrlr_bdev.o 00:02:18.812 CC lib/ftl/ftl_sb.o 00:02:18.812 CC lib/nvmf/nvmf_rpc.o 00:02:18.812 CC lib/nvmf/subsystem.o 00:02:18.812 CC lib/ftl/ftl_l2p.o 00:02:18.812 CC lib/nvmf/nvmf.o 00:02:18.812 CC lib/ftl/ftl_l2p_flat.o 00:02:18.812 CC lib/ftl/ftl_nv_cache.o 00:02:18.812 CC lib/nvmf/vfio_user.o 00:02:18.812 CC lib/nvmf/transport.o 00:02:18.812 CC lib/ftl/ftl_band.o 00:02:18.812 CC lib/nvmf/tcp.o 00:02:18.812 CC lib/ftl/ftl_band_ops.o 00:02:18.812 CC lib/nvmf/rdma.o 00:02:18.812 CC lib/ftl/ftl_writer.o 00:02:18.812 CC lib/ftl/ftl_rq.o 00:02:18.812 CC lib/ftl/ftl_reloc.o 00:02:18.812 CC lib/ftl/ftl_l2p_cache.o 00:02:18.812 CC lib/ftl/ftl_p2l.o 00:02:18.812 CC lib/ftl/mngt/ftl_mngt.o 00:02:18.812 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:18.812 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:18.812 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:18.812 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:18.812 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:18.812 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:18.812 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:18.812 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:18.812 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:18.812 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:18.812 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:18.812 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:18.812 CC lib/ftl/utils/ftl_conf.o 00:02:18.813 CC lib/ftl/utils/ftl_md.o 00:02:18.813 CC lib/ftl/utils/ftl_mempool.o 00:02:18.813 CC lib/ftl/utils/ftl_property.o 00:02:18.813 CC lib/ftl/utils/ftl_bitmap.o 00:02:18.813 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:18.813 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:18.813 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:18.813 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:18.813 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:18.813 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:18.813 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:18.813 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:18.813 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:18.813 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:18.813 CC 
lib/ftl/base/ftl_base_dev.o 00:02:18.813 CC lib/ftl/base/ftl_base_bdev.o 00:02:18.813 CC lib/ftl/ftl_trace.o 00:02:19.071 LIB libspdk_nbd.a 00:02:19.329 LIB libspdk_scsi.a 00:02:19.329 LIB libspdk_ublk.a 00:02:19.588 LIB libspdk_ftl.a 00:02:19.588 CC lib/vhost/vhost.o 00:02:19.588 CC lib/vhost/vhost_rpc.o 00:02:19.588 CC lib/vhost/vhost_scsi.o 00:02:19.588 CC lib/vhost/vhost_blk.o 00:02:19.588 CC lib/vhost/rte_vhost_user.o 00:02:19.588 CC lib/iscsi/init_grp.o 00:02:19.588 CC lib/iscsi/conn.o 00:02:19.588 CC lib/iscsi/md5.o 00:02:19.588 CC lib/iscsi/iscsi.o 00:02:19.588 CC lib/iscsi/tgt_node.o 00:02:19.588 CC lib/iscsi/param.o 00:02:19.588 CC lib/iscsi/iscsi_subsystem.o 00:02:19.588 CC lib/iscsi/portal_grp.o 00:02:19.588 CC lib/iscsi/task.o 00:02:19.588 CC lib/iscsi/iscsi_rpc.o 00:02:20.155 LIB libspdk_nvmf.a 00:02:20.155 LIB libspdk_vhost.a 00:02:20.413 LIB libspdk_iscsi.a 00:02:20.672 CC module/env_dpdk/env_dpdk_rpc.o 00:02:20.672 CC module/vfu_device/vfu_virtio.o 00:02:20.672 CC module/vfu_device/vfu_virtio_scsi.o 00:02:20.672 CC module/vfu_device/vfu_virtio_rpc.o 00:02:20.672 CC module/vfu_device/vfu_virtio_blk.o 00:02:20.930 LIB libspdk_env_dpdk_rpc.a 00:02:20.930 CC module/sock/posix/posix.o 00:02:20.930 CC module/accel/error/accel_error.o 00:02:20.930 CC module/accel/error/accel_error_rpc.o 00:02:20.930 CC module/accel/iaa/accel_iaa_rpc.o 00:02:20.930 CC module/accel/iaa/accel_iaa.o 00:02:20.930 CC module/accel/dsa/accel_dsa.o 00:02:20.930 CC module/accel/dsa/accel_dsa_rpc.o 00:02:20.930 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:20.930 CC module/blob/bdev/blob_bdev.o 00:02:20.930 CC module/accel/ioat/accel_ioat.o 00:02:20.930 CC module/accel/ioat/accel_ioat_rpc.o 00:02:20.930 CC module/scheduler/gscheduler/gscheduler.o 00:02:20.930 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:20.930 LIB libspdk_scheduler_dpdk_governor.a 00:02:20.930 LIB libspdk_accel_error.a 00:02:20.930 LIB libspdk_scheduler_gscheduler.a 00:02:20.930 LIB libspdk_accel_iaa.a 00:02:20.930 LIB libspdk_scheduler_dynamic.a 00:02:20.930 LIB libspdk_accel_ioat.a 00:02:20.930 LIB libspdk_accel_dsa.a 00:02:21.188 LIB libspdk_blob_bdev.a 00:02:21.188 LIB libspdk_vfu_device.a 00:02:21.188 LIB libspdk_sock_posix.a 00:02:21.447 CC module/bdev/split/vbdev_split_rpc.o 00:02:21.447 CC module/bdev/split/vbdev_split.o 00:02:21.447 CC module/bdev/lvol/vbdev_lvol.o 00:02:21.447 CC module/bdev/error/vbdev_error_rpc.o 00:02:21.447 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:21.447 CC module/blobfs/bdev/blobfs_bdev.o 00:02:21.447 CC module/bdev/error/vbdev_error.o 00:02:21.447 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:21.447 CC module/bdev/nvme/bdev_nvme.o 00:02:21.447 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:21.447 CC module/bdev/passthru/vbdev_passthru.o 00:02:21.447 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:21.447 CC module/bdev/nvme/bdev_mdns_client.o 00:02:21.447 CC module/bdev/nvme/nvme_rpc.o 00:02:21.447 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:21.447 CC module/bdev/delay/vbdev_delay.o 00:02:21.447 CC module/bdev/nvme/vbdev_opal.o 00:02:21.447 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:21.447 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:21.447 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:21.447 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:21.447 CC module/bdev/raid/bdev_raid_rpc.o 00:02:21.447 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:21.447 CC module/bdev/raid/bdev_raid_sb.o 00:02:21.447 CC module/bdev/gpt/gpt.o 00:02:21.447 CC module/bdev/raid/bdev_raid.o 00:02:21.447 CC 
module/bdev/gpt/vbdev_gpt.o 00:02:21.447 CC module/bdev/raid/raid0.o 00:02:21.447 CC module/bdev/iscsi/bdev_iscsi.o 00:02:21.447 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:21.447 CC module/bdev/raid/concat.o 00:02:21.447 CC module/bdev/raid/raid1.o 00:02:21.447 CC module/bdev/malloc/bdev_malloc.o 00:02:21.447 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:21.447 CC module/bdev/null/bdev_null.o 00:02:21.447 CC module/bdev/null/bdev_null_rpc.o 00:02:21.447 CC module/bdev/aio/bdev_aio_rpc.o 00:02:21.447 CC module/bdev/aio/bdev_aio.o 00:02:21.447 CC module/bdev/ftl/bdev_ftl.o 00:02:21.447 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:21.447 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:21.447 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:21.706 LIB libspdk_blobfs_bdev.a 00:02:21.706 LIB libspdk_bdev_split.a 00:02:21.706 LIB libspdk_bdev_error.a 00:02:21.706 LIB libspdk_bdev_gpt.a 00:02:21.706 LIB libspdk_bdev_null.a 00:02:21.706 LIB libspdk_bdev_passthru.a 00:02:21.706 LIB libspdk_bdev_ftl.a 00:02:21.706 LIB libspdk_bdev_aio.a 00:02:21.706 LIB libspdk_bdev_iscsi.a 00:02:21.706 LIB libspdk_bdev_delay.a 00:02:21.706 LIB libspdk_bdev_zone_block.a 00:02:21.706 LIB libspdk_bdev_malloc.a 00:02:21.706 LIB libspdk_bdev_lvol.a 00:02:21.706 LIB libspdk_bdev_virtio.a 00:02:21.966 LIB libspdk_bdev_raid.a 00:02:22.534 LIB libspdk_bdev_nvme.a 00:02:23.158 CC module/event/subsystems/sock/sock.o 00:02:23.158 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:23.158 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:23.158 CC module/event/subsystems/vmd/vmd.o 00:02:23.158 CC module/event/subsystems/scheduler/scheduler.o 00:02:23.158 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:23.158 CC module/event/subsystems/iobuf/iobuf.o 00:02:23.158 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:23.417 LIB libspdk_event_sock.a 00:02:23.417 LIB libspdk_event_vhost_blk.a 00:02:23.417 LIB libspdk_event_vmd.a 00:02:23.417 LIB libspdk_event_vfu_tgt.a 00:02:23.417 LIB libspdk_event_scheduler.a 00:02:23.418 LIB libspdk_event_iobuf.a 00:02:23.676 CC module/event/subsystems/accel/accel.o 00:02:23.676 LIB libspdk_event_accel.a 00:02:24.245 CC module/event/subsystems/bdev/bdev.o 00:02:24.245 LIB libspdk_event_bdev.a 00:02:24.504 CC module/event/subsystems/nbd/nbd.o 00:02:24.504 CC module/event/subsystems/ublk/ublk.o 00:02:24.504 CC module/event/subsystems/scsi/scsi.o 00:02:24.504 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:24.504 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:24.504 LIB libspdk_event_nbd.a 00:02:24.504 LIB libspdk_event_ublk.a 00:02:24.763 LIB libspdk_event_scsi.a 00:02:24.763 LIB libspdk_event_nvmf.a 00:02:25.022 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:25.022 CC module/event/subsystems/iscsi/iscsi.o 00:02:25.022 LIB libspdk_event_vhost_scsi.a 00:02:25.022 LIB libspdk_event_iscsi.a 00:02:25.281 TEST_HEADER include/spdk/accel.h 00:02:25.281 TEST_HEADER include/spdk/accel_module.h 00:02:25.281 TEST_HEADER include/spdk/barrier.h 00:02:25.281 TEST_HEADER include/spdk/assert.h 00:02:25.281 TEST_HEADER include/spdk/base64.h 00:02:25.281 TEST_HEADER include/spdk/bdev.h 00:02:25.281 CC test/rpc_client/rpc_client_test.o 00:02:25.281 TEST_HEADER include/spdk/bit_array.h 00:02:25.281 TEST_HEADER include/spdk/bdev_module.h 00:02:25.281 TEST_HEADER include/spdk/bdev_zone.h 00:02:25.281 TEST_HEADER include/spdk/bit_pool.h 00:02:25.281 TEST_HEADER include/spdk/blob_bdev.h 00:02:25.281 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:25.281 TEST_HEADER include/spdk/blobfs.h 00:02:25.281 
TEST_HEADER include/spdk/blob.h 00:02:25.281 TEST_HEADER include/spdk/conf.h 00:02:25.281 TEST_HEADER include/spdk/config.h 00:02:25.281 TEST_HEADER include/spdk/cpuset.h 00:02:25.281 TEST_HEADER include/spdk/crc32.h 00:02:25.281 TEST_HEADER include/spdk/crc16.h 00:02:25.281 TEST_HEADER include/spdk/crc64.h 00:02:25.281 TEST_HEADER include/spdk/dif.h 00:02:25.281 TEST_HEADER include/spdk/dma.h 00:02:25.281 TEST_HEADER include/spdk/env_dpdk.h 00:02:25.281 TEST_HEADER include/spdk/endian.h 00:02:25.281 TEST_HEADER include/spdk/env.h 00:02:25.281 TEST_HEADER include/spdk/event.h 00:02:25.281 TEST_HEADER include/spdk/fd_group.h 00:02:25.281 TEST_HEADER include/spdk/fd.h 00:02:25.281 TEST_HEADER include/spdk/file.h 00:02:25.281 TEST_HEADER include/spdk/gpt_spec.h 00:02:25.281 TEST_HEADER include/spdk/ftl.h 00:02:25.281 TEST_HEADER include/spdk/hexlify.h 00:02:25.281 TEST_HEADER include/spdk/histogram_data.h 00:02:25.281 TEST_HEADER include/spdk/idxd.h 00:02:25.281 TEST_HEADER include/spdk/idxd_spec.h 00:02:25.281 TEST_HEADER include/spdk/init.h 00:02:25.281 TEST_HEADER include/spdk/ioat.h 00:02:25.281 TEST_HEADER include/spdk/ioat_spec.h 00:02:25.281 CC app/spdk_lspci/spdk_lspci.o 00:02:25.281 CC app/spdk_nvme_identify/identify.o 00:02:25.281 CC app/trace_record/trace_record.o 00:02:25.281 TEST_HEADER include/spdk/iscsi_spec.h 00:02:25.281 TEST_HEADER include/spdk/json.h 00:02:25.543 TEST_HEADER include/spdk/likely.h 00:02:25.543 TEST_HEADER include/spdk/jsonrpc.h 00:02:25.543 TEST_HEADER include/spdk/log.h 00:02:25.543 CC app/spdk_nvme_discover/discovery_aer.o 00:02:25.543 TEST_HEADER include/spdk/memory.h 00:02:25.543 TEST_HEADER include/spdk/lvol.h 00:02:25.544 CC app/spdk_nvme_perf/perf.o 00:02:25.544 TEST_HEADER include/spdk/mmio.h 00:02:25.544 TEST_HEADER include/spdk/notify.h 00:02:25.544 TEST_HEADER include/spdk/nbd.h 00:02:25.544 TEST_HEADER include/spdk/nvme.h 00:02:25.544 CXX app/trace/trace.o 00:02:25.544 TEST_HEADER include/spdk/nvme_intel.h 00:02:25.544 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:25.544 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:25.544 TEST_HEADER include/spdk/nvme_spec.h 00:02:25.544 TEST_HEADER include/spdk/nvme_zns.h 00:02:25.544 CC app/spdk_top/spdk_top.o 00:02:25.544 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:25.544 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:25.544 TEST_HEADER include/spdk/nvmf.h 00:02:25.544 TEST_HEADER include/spdk/nvmf_spec.h 00:02:25.544 TEST_HEADER include/spdk/nvmf_transport.h 00:02:25.544 TEST_HEADER include/spdk/opal.h 00:02:25.544 TEST_HEADER include/spdk/opal_spec.h 00:02:25.544 TEST_HEADER include/spdk/pci_ids.h 00:02:25.544 TEST_HEADER include/spdk/pipe.h 00:02:25.544 TEST_HEADER include/spdk/queue.h 00:02:25.544 TEST_HEADER include/spdk/reduce.h 00:02:25.544 TEST_HEADER include/spdk/rpc.h 00:02:25.544 TEST_HEADER include/spdk/scheduler.h 00:02:25.544 TEST_HEADER include/spdk/scsi.h 00:02:25.544 TEST_HEADER include/spdk/scsi_spec.h 00:02:25.544 TEST_HEADER include/spdk/sock.h 00:02:25.544 TEST_HEADER include/spdk/stdinc.h 00:02:25.544 TEST_HEADER include/spdk/string.h 00:02:25.544 TEST_HEADER include/spdk/thread.h 00:02:25.544 TEST_HEADER include/spdk/trace.h 00:02:25.544 TEST_HEADER include/spdk/tree.h 00:02:25.544 TEST_HEADER include/spdk/trace_parser.h 00:02:25.544 TEST_HEADER include/spdk/ublk.h 00:02:25.544 TEST_HEADER include/spdk/util.h 00:02:25.544 TEST_HEADER include/spdk/uuid.h 00:02:25.544 TEST_HEADER include/spdk/version.h 00:02:25.544 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:25.544 TEST_HEADER 
include/spdk/vfio_user_pci.h 00:02:25.544 TEST_HEADER include/spdk/vhost.h 00:02:25.544 TEST_HEADER include/spdk/vmd.h 00:02:25.544 TEST_HEADER include/spdk/xor.h 00:02:25.544 CXX test/cpp_headers/accel.o 00:02:25.544 TEST_HEADER include/spdk/zipf.h 00:02:25.544 CXX test/cpp_headers/accel_module.o 00:02:25.544 CXX test/cpp_headers/assert.o 00:02:25.544 CXX test/cpp_headers/barrier.o 00:02:25.544 CXX test/cpp_headers/base64.o 00:02:25.544 CC app/spdk_dd/spdk_dd.o 00:02:25.544 CC app/iscsi_tgt/iscsi_tgt.o 00:02:25.544 CXX test/cpp_headers/bdev.o 00:02:25.544 CXX test/cpp_headers/bdev_module.o 00:02:25.544 CXX test/cpp_headers/bdev_zone.o 00:02:25.544 CXX test/cpp_headers/bit_array.o 00:02:25.544 CXX test/cpp_headers/bit_pool.o 00:02:25.544 CXX test/cpp_headers/blob_bdev.o 00:02:25.544 CXX test/cpp_headers/blob.o 00:02:25.544 CXX test/cpp_headers/blobfs.o 00:02:25.544 CXX test/cpp_headers/blobfs_bdev.o 00:02:25.544 CXX test/cpp_headers/conf.o 00:02:25.544 CXX test/cpp_headers/config.o 00:02:25.544 CXX test/cpp_headers/crc16.o 00:02:25.544 CXX test/cpp_headers/cpuset.o 00:02:25.544 CXX test/cpp_headers/crc32.o 00:02:25.544 CXX test/cpp_headers/crc64.o 00:02:25.544 CXX test/cpp_headers/dma.o 00:02:25.544 CXX test/cpp_headers/dif.o 00:02:25.544 CXX test/cpp_headers/endian.o 00:02:25.544 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:25.544 CXX test/cpp_headers/env_dpdk.o 00:02:25.544 CXX test/cpp_headers/env.o 00:02:25.544 CXX test/cpp_headers/event.o 00:02:25.544 CXX test/cpp_headers/fd_group.o 00:02:25.544 CXX test/cpp_headers/fd.o 00:02:25.544 CC app/nvmf_tgt/nvmf_main.o 00:02:25.544 CXX test/cpp_headers/file.o 00:02:25.544 CXX test/cpp_headers/ftl.o 00:02:25.544 CC app/vhost/vhost.o 00:02:25.544 CXX test/cpp_headers/gpt_spec.o 00:02:25.544 CXX test/cpp_headers/hexlify.o 00:02:25.544 CXX test/cpp_headers/histogram_data.o 00:02:25.544 CXX test/cpp_headers/idxd.o 00:02:25.544 CXX test/cpp_headers/idxd_spec.o 00:02:25.544 CXX test/cpp_headers/init.o 00:02:25.544 CC test/app/histogram_perf/histogram_perf.o 00:02:25.544 CC test/app/jsoncat/jsoncat.o 00:02:25.544 CC test/nvme/reset/reset.o 00:02:25.544 CC test/nvme/aer/aer.o 00:02:25.544 CC test/nvme/err_injection/err_injection.o 00:02:25.544 CC test/nvme/e2edp/nvme_dp.o 00:02:25.544 CC test/nvme/sgl/sgl.o 00:02:25.544 CC test/nvme/overhead/overhead.o 00:02:25.544 CC test/nvme/startup/startup.o 00:02:25.544 CC test/env/vtophys/vtophys.o 00:02:25.544 CC test/nvme/simple_copy/simple_copy.o 00:02:25.544 CC test/nvme/cuse/cuse.o 00:02:25.544 CC test/app/stub/stub.o 00:02:25.544 CXX test/cpp_headers/ioat.o 00:02:25.544 CC app/spdk_tgt/spdk_tgt.o 00:02:25.544 CC test/nvme/boot_partition/boot_partition.o 00:02:25.544 CC test/env/pci/pci_ut.o 00:02:25.544 CC test/nvme/fused_ordering/fused_ordering.o 00:02:25.544 CC test/nvme/reserve/reserve.o 00:02:25.544 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:25.544 CC test/nvme/compliance/nvme_compliance.o 00:02:25.544 CC test/env/memory/memory_ut.o 00:02:25.544 CC test/nvme/connect_stress/connect_stress.o 00:02:25.544 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:25.544 CC test/thread/poller_perf/poller_perf.o 00:02:25.544 CC test/nvme/fdp/fdp.o 00:02:25.544 CC test/event/reactor_perf/reactor_perf.o 00:02:25.544 CC test/event/event_perf/event_perf.o 00:02:25.544 CC test/event/reactor/reactor.o 00:02:25.544 CC test/accel/dif/dif.o 00:02:25.544 CC test/thread/lock/spdk_lock.o 00:02:25.544 CC examples/sock/hello_world/hello_sock.o 00:02:25.544 CC examples/util/zipf/zipf.o 00:02:25.544 CC 
examples/accel/perf/accel_perf.o 00:02:25.544 CC examples/vmd/lsvmd/lsvmd.o 00:02:25.544 CC examples/nvme/abort/abort.o 00:02:25.544 CC examples/ioat/verify/verify.o 00:02:25.544 CC examples/nvme/reconnect/reconnect.o 00:02:25.544 CC examples/vmd/led/led.o 00:02:25.544 CC examples/nvme/hotplug/hotplug.o 00:02:25.544 CC examples/nvme/hello_world/hello_world.o 00:02:25.544 CC examples/ioat/perf/perf.o 00:02:25.544 CC examples/idxd/perf/perf.o 00:02:25.544 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:25.544 CC examples/nvme/arbitration/arbitration.o 00:02:25.544 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:25.544 CC test/event/app_repeat/app_repeat.o 00:02:25.544 CC app/fio/nvme/fio_plugin.o 00:02:25.544 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:25.544 CC test/blobfs/mkfs/mkfs.o 00:02:25.544 CC test/app/bdev_svc/bdev_svc.o 00:02:25.544 LINK spdk_lspci 00:02:25.544 CC examples/bdev/bdevperf/bdevperf.o 00:02:25.544 CC test/bdev/bdevio/bdevio.o 00:02:25.544 CC examples/bdev/hello_world/hello_bdev.o 00:02:25.544 CC test/dma/test_dma/test_dma.o 00:02:25.544 CC test/event/scheduler/scheduler.o 00:02:25.544 CC examples/thread/thread/thread_ex.o 00:02:25.544 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:25.544 LINK rpc_client_test 00:02:25.544 CC app/fio/bdev/fio_plugin.o 00:02:25.544 CC examples/nvmf/nvmf/nvmf.o 00:02:25.544 CC examples/blob/hello_world/hello_blob.o 00:02:25.544 CC examples/blob/cli/blobcli.o 00:02:25.544 CC test/lvol/esnap/esnap.o 00:02:25.544 CC test/env/mem_callbacks/mem_callbacks.o 00:02:25.807 LINK spdk_nvme_discover 00:02:25.807 CXX test/cpp_headers/ioat_spec.o 00:02:25.807 CXX test/cpp_headers/iscsi_spec.o 00:02:25.807 LINK jsoncat 00:02:25.807 CXX test/cpp_headers/json.o 00:02:25.807 CXX test/cpp_headers/jsonrpc.o 00:02:25.807 CXX test/cpp_headers/likely.o 00:02:25.807 CXX test/cpp_headers/log.o 00:02:25.807 CXX test/cpp_headers/lvol.o 00:02:25.807 CXX test/cpp_headers/memory.o 00:02:25.807 CXX test/cpp_headers/mmio.o 00:02:25.807 LINK histogram_perf 00:02:25.807 CXX test/cpp_headers/nbd.o 00:02:25.807 CXX test/cpp_headers/notify.o 00:02:25.807 CXX test/cpp_headers/nvme.o 00:02:25.807 LINK spdk_trace_record 00:02:25.807 CXX test/cpp_headers/nvme_intel.o 00:02:25.807 CXX test/cpp_headers/nvme_ocssd.o 00:02:25.807 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:25.807 CXX test/cpp_headers/nvme_spec.o 00:02:25.807 CXX test/cpp_headers/nvme_zns.o 00:02:25.807 CXX test/cpp_headers/nvmf_cmd.o 00:02:25.807 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:25.807 CXX test/cpp_headers/nvmf.o 00:02:25.807 CXX test/cpp_headers/nvmf_spec.o 00:02:25.807 CXX test/cpp_headers/nvmf_transport.o 00:02:25.807 LINK vtophys 00:02:25.807 CXX test/cpp_headers/opal.o 00:02:25.807 CXX test/cpp_headers/opal_spec.o 00:02:25.807 CXX test/cpp_headers/pci_ids.o 00:02:25.807 LINK nvmf_tgt 00:02:25.807 LINK poller_perf 00:02:25.807 CXX test/cpp_headers/pipe.o 00:02:25.807 LINK lsvmd 00:02:25.807 LINK reactor_perf 00:02:25.807 LINK reactor 00:02:25.807 LINK interrupt_tgt 00:02:25.807 CXX test/cpp_headers/queue.o 00:02:25.807 LINK event_perf 00:02:25.807 CXX test/cpp_headers/reduce.o 00:02:25.807 CXX test/cpp_headers/rpc.o 00:02:25.807 LINK env_dpdk_post_init 00:02:25.807 LINK led 00:02:25.807 CXX test/cpp_headers/scheduler.o 00:02:25.807 LINK vhost 00:02:25.807 LINK startup 00:02:25.807 LINK zipf 00:02:25.807 CXX test/cpp_headers/scsi.o 00:02:25.807 LINK stub 00:02:25.807 CXX test/cpp_headers/scsi_spec.o 00:02:25.807 LINK err_injection 00:02:25.807 LINK iscsi_tgt 00:02:25.807 CXX 
test/cpp_headers/sock.o 00:02:25.807 LINK boot_partition 00:02:25.807 LINK connect_stress 00:02:25.807 LINK app_repeat 00:02:25.807 LINK fused_ordering 00:02:25.807 LINK doorbell_aers 00:02:25.807 CXX test/cpp_headers/stdinc.o 00:02:25.807 LINK pmr_persistence 00:02:25.807 LINK reserve 00:02:25.807 LINK bdev_svc 00:02:25.807 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:25.807 LINK cmb_copy 00:02:25.807 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:25.807 LINK simple_copy 00:02:25.807 LINK spdk_tgt 00:02:25.807 LINK ioat_perf 00:02:25.807 LINK verify 00:02:25.807 LINK hello_sock 00:02:25.807 LINK hello_world 00:02:25.807 LINK reset 00:02:25.807 LINK hotplug 00:02:25.807 LINK aer 00:02:25.807 LINK mkfs 00:02:25.807 LINK nvme_dp 00:02:25.807 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:25.807 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:25.807 LINK fdp 00:02:25.807 LINK sgl 00:02:25.807 LINK overhead 00:02:25.807 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:25.807 CXX test/cpp_headers/string.o 00:02:25.807 LINK scheduler 00:02:25.807 CXX test/cpp_headers/thread.o 00:02:25.807 CXX test/cpp_headers/trace.o 00:02:25.807 CXX test/cpp_headers/trace_parser.o 00:02:26.069 CXX test/cpp_headers/tree.o 00:02:26.069 CXX test/cpp_headers/ublk.o 00:02:26.069 CXX test/cpp_headers/util.o 00:02:26.069 CXX test/cpp_headers/uuid.o 00:02:26.069 LINK hello_bdev 00:02:26.069 CXX test/cpp_headers/version.o 00:02:26.069 LINK hello_blob 00:02:26.069 CXX test/cpp_headers/vfio_user_pci.o 00:02:26.069 LINK thread 00:02:26.069 CXX test/cpp_headers/vfio_user_spec.o 00:02:26.069 CXX test/cpp_headers/vhost.o 00:02:26.069 CXX test/cpp_headers/vmd.o 00:02:26.069 CXX test/cpp_headers/xor.o 00:02:26.069 CXX test/cpp_headers/zipf.o 00:02:26.069 LINK idxd_perf 00:02:26.069 LINK spdk_trace 00:02:26.069 LINK reconnect 00:02:26.069 LINK nvmf 00:02:26.069 LINK arbitration 00:02:26.069 LINK abort 00:02:26.069 LINK dif 00:02:26.069 LINK bdevio 00:02:26.069 LINK test_dma 00:02:26.069 LINK spdk_dd 00:02:26.069 LINK nvme_fuzz 00:02:26.069 LINK nvme_compliance 00:02:26.069 LINK nvme_manage 00:02:26.069 LINK pci_ut 00:02:26.326 LINK llvm_vfio_fuzz 00:02:26.326 LINK accel_perf 00:02:26.326 LINK blobcli 00:02:26.326 LINK spdk_nvme 00:02:26.326 LINK mem_callbacks 00:02:26.326 LINK spdk_bdev 00:02:26.326 LINK spdk_top 00:02:26.583 LINK spdk_nvme_identify 00:02:26.583 LINK llvm_nvme_fuzz 00:02:26.583 LINK vhost_fuzz 00:02:26.583 LINK memory_ut 00:02:26.583 LINK spdk_nvme_perf 00:02:26.583 LINK bdevperf 00:02:26.840 LINK cuse 00:02:27.099 LINK spdk_lock 00:02:27.357 LINK iscsi_fuzz 00:02:29.264 LINK esnap 00:02:29.264 00:02:29.264 real 0m41.895s 00:02:29.264 user 5m43.245s 00:02:29.264 sys 2m47.493s 00:02:29.264 10:40:18 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:29.264 10:40:18 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.264 ************************************ 00:02:29.264 END TEST make 00:02:29.264 ************************************ 00:02:29.524 10:40:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:02:29.524 10:40:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:02:29.524 10:40:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:02:29.524 10:40:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:02:29.524 10:40:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:02:29.524 10:40:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:02:29.524 10:40:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:02:29.524 10:40:18 -- scripts/common.sh@335 
-- # IFS=.-: 00:02:29.524 10:40:18 -- scripts/common.sh@335 -- # read -ra ver1 00:02:29.524 10:40:18 -- scripts/common.sh@336 -- # IFS=.-: 00:02:29.524 10:40:18 -- scripts/common.sh@336 -- # read -ra ver2 00:02:29.524 10:40:18 -- scripts/common.sh@337 -- # local 'op=<' 00:02:29.524 10:40:18 -- scripts/common.sh@339 -- # ver1_l=2 00:02:29.524 10:40:18 -- scripts/common.sh@340 -- # ver2_l=1 00:02:29.524 10:40:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:02:29.524 10:40:18 -- scripts/common.sh@343 -- # case "$op" in 00:02:29.524 10:40:18 -- scripts/common.sh@344 -- # : 1 00:02:29.524 10:40:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:02:29.524 10:40:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:29.524 10:40:18 -- scripts/common.sh@364 -- # decimal 1 00:02:29.524 10:40:18 -- scripts/common.sh@352 -- # local d=1 00:02:29.524 10:40:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:29.524 10:40:18 -- scripts/common.sh@354 -- # echo 1 00:02:29.524 10:40:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:02:29.524 10:40:18 -- scripts/common.sh@365 -- # decimal 2 00:02:29.524 10:40:18 -- scripts/common.sh@352 -- # local d=2 00:02:29.524 10:40:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:29.524 10:40:18 -- scripts/common.sh@354 -- # echo 2 00:02:29.524 10:40:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:02:29.524 10:40:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:02:29.524 10:40:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:02:29.524 10:40:18 -- scripts/common.sh@367 -- # return 0 00:02:29.524 10:40:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:29.524 10:40:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:02:29.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:29.524 --rc genhtml_branch_coverage=1 00:02:29.524 --rc genhtml_function_coverage=1 00:02:29.524 --rc genhtml_legend=1 00:02:29.524 --rc geninfo_all_blocks=1 00:02:29.524 --rc geninfo_unexecuted_blocks=1 00:02:29.524 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:29.524 ' 00:02:29.524 10:40:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:02:29.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:29.524 --rc genhtml_branch_coverage=1 00:02:29.524 --rc genhtml_function_coverage=1 00:02:29.524 --rc genhtml_legend=1 00:02:29.524 --rc geninfo_all_blocks=1 00:02:29.524 --rc geninfo_unexecuted_blocks=1 00:02:29.524 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:29.524 ' 00:02:29.524 10:40:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:02:29.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:29.524 --rc genhtml_branch_coverage=1 00:02:29.524 --rc genhtml_function_coverage=1 00:02:29.524 --rc genhtml_legend=1 00:02:29.524 --rc geninfo_all_blocks=1 00:02:29.524 --rc geninfo_unexecuted_blocks=1 00:02:29.524 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:29.524 ' 00:02:29.524 10:40:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:02:29.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:29.525 --rc genhtml_branch_coverage=1 00:02:29.525 --rc genhtml_function_coverage=1 00:02:29.525 --rc genhtml_legend=1 00:02:29.525 --rc geninfo_all_blocks=1 00:02:29.525 --rc geninfo_unexecuted_blocks=1 00:02:29.525 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:29.525 ' 00:02:29.525 10:40:18 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:29.525 10:40:18 -- nvmf/common.sh@7 -- # uname -s 00:02:29.525 10:40:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:29.525 10:40:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:29.525 10:40:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:29.525 10:40:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:29.525 10:40:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:29.525 10:40:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:29.525 10:40:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:29.525 10:40:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:29.525 10:40:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:29.525 10:40:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:29.525 10:40:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:02:29.525 10:40:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:02:29.525 10:40:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:29.525 10:40:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:29.525 10:40:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:29.525 10:40:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:29.525 10:40:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:29.525 10:40:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:29.525 10:40:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:29.525 10:40:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.525 10:40:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.525 10:40:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.525 10:40:18 -- paths/export.sh@5 -- # export PATH 00:02:29.525 10:40:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.525 10:40:18 -- nvmf/common.sh@46 -- # : 0 00:02:29.525 10:40:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:29.525 10:40:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:29.525 10:40:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:29.525 10:40:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:29.525 
10:40:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:29.525 10:40:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:29.525 10:40:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:29.525 10:40:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:29.525 10:40:18 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:29.525 10:40:18 -- spdk/autotest.sh@32 -- # uname -s 00:02:29.525 10:40:18 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:29.525 10:40:18 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:29.525 10:40:18 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:29.525 10:40:18 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:29.525 10:40:18 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:29.525 10:40:18 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:29.525 10:40:18 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:29.525 10:40:18 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:29.525 10:40:18 -- spdk/autotest.sh@48 -- # udevadm_pid=1233039 00:02:29.525 10:40:18 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:02:29.525 10:40:18 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:29.525 10:40:18 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:02:29.525 10:40:18 -- spdk/autotest.sh@54 -- # echo 1233041 00:02:29.525 10:40:18 -- spdk/autotest.sh@56 -- # echo 1233042 00:02:29.525 10:40:18 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:02:29.525 10:40:18 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:02:29.525 10:40:18 -- spdk/autotest.sh@60 -- # echo 1233043 00:02:29.525 10:40:18 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l 00:02:29.525 10:40:18 -- spdk/autotest.sh@62 -- # echo 1233045 00:02:29.525 10:40:18 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l 00:02:29.525 10:40:18 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:29.525 10:40:18 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:29.525 10:40:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:29.525 10:40:18 -- common/autotest_common.sh@10 -- # set +x 00:02:29.784 10:40:18 -- spdk/autotest.sh@70 -- # create_test_list 00:02:29.784 10:40:18 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:29.784 10:40:18 -- common/autotest_common.sh@10 -- # set +x 00:02:29.784 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:29.784 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:29.784 10:40:18 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:29.784 10:40:18 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:29.784 10:40:18 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:29.784 10:40:18 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:29.784 10:40:18 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:29.784 10:40:18 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:29.784 10:40:18 -- common/autotest_common.sh@1450 -- # uname 00:02:29.784 10:40:18 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:02:29.784 10:40:18 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:29.784 10:40:18 -- common/autotest_common.sh@1470 -- # uname 00:02:29.784 10:40:18 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:02:29.784 10:40:18 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:02:29.784 10:40:18 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:02:29.784 lcov: LCOV version 1.15 00:02:29.784 10:40:18 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:02:31.692 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:31.692 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:31.692 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:43.907 10:40:32 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:02:43.907 10:40:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:43.907 10:40:32 -- common/autotest_common.sh@10 -- # set +x 00:02:43.907 10:40:32 -- spdk/autotest.sh@89 -- # rm -f 00:02:43.907 10:40:32 -- spdk/autotest.sh@92 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:47.198 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:47.198 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:47.198 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:47.198 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:47.198 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:47.198 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:47.198 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:47.198 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:47.198 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:47.198 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:47.198 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:47.198 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:47.198 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:47.198 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:47.198 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:47.198 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:47.198 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:47.198 10:40:36 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:02:47.198 10:40:36 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:02:47.198 10:40:36 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:02:47.198 10:40:36 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:02:47.198 10:40:36 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:47.198 10:40:36 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:02:47.198 10:40:36 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:02:47.198 10:40:36 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:47.198 10:40:36 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:47.198 10:40:36 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:02:47.198 10:40:36 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 00:02:47.198 10:40:36 -- spdk/autotest.sh@108 -- # grep -v p 00:02:47.198 10:40:36 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:47.198 10:40:36 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:02:47.198 10:40:36 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:02:47.198 10:40:36 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:02:47.198 10:40:36 -- scripts/common.sh@389 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:47.198 No valid GPT data, bailing 00:02:47.198 10:40:36 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:47.198 10:40:36 -- scripts/common.sh@393 -- # pt= 00:02:47.198 10:40:36 -- scripts/common.sh@394 -- # return 1 00:02:47.198 10:40:36 -- spdk/autotest.sh@112 -- # dd 
if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:47.198 1+0 records in 00:02:47.198 1+0 records out 00:02:47.198 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00564853 s, 186 MB/s 00:02:47.198 10:40:36 -- spdk/autotest.sh@116 -- # sync 00:02:47.198 10:40:36 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:47.198 10:40:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:47.198 10:40:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:53.769 10:40:42 -- spdk/autotest.sh@122 -- # uname -s 00:02:53.769 10:40:42 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:02:53.769 10:40:42 -- spdk/autotest.sh@123 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:53.769 10:40:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:53.769 10:40:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:53.769 10:40:42 -- common/autotest_common.sh@10 -- # set +x 00:02:53.769 ************************************ 00:02:53.769 START TEST setup.sh 00:02:53.769 ************************************ 00:02:53.769 10:40:42 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:53.769 * Looking for test storage... 00:02:53.769 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:53.769 10:40:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:02:53.769 10:40:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:02:53.769 10:40:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:02:53.769 10:40:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:02:53.769 10:40:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:02:53.769 10:40:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:02:53.769 10:40:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:02:53.769 10:40:42 -- scripts/common.sh@335 -- # IFS=.-: 00:02:53.769 10:40:42 -- scripts/common.sh@335 -- # read -ra ver1 00:02:53.769 10:40:42 -- scripts/common.sh@336 -- # IFS=.-: 00:02:53.769 10:40:42 -- scripts/common.sh@336 -- # read -ra ver2 00:02:53.769 10:40:42 -- scripts/common.sh@337 -- # local 'op=<' 00:02:53.769 10:40:42 -- scripts/common.sh@339 -- # ver1_l=2 00:02:53.769 10:40:42 -- scripts/common.sh@340 -- # ver2_l=1 00:02:53.769 10:40:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:02:53.769 10:40:42 -- scripts/common.sh@343 -- # case "$op" in 00:02:53.769 10:40:42 -- scripts/common.sh@344 -- # : 1 00:02:53.769 10:40:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:02:53.769 10:40:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:53.769 10:40:42 -- scripts/common.sh@364 -- # decimal 1 00:02:53.769 10:40:42 -- scripts/common.sh@352 -- # local d=1 00:02:53.769 10:40:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:53.769 10:40:42 -- scripts/common.sh@354 -- # echo 1 00:02:53.769 10:40:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:02:53.769 10:40:42 -- scripts/common.sh@365 -- # decimal 2 00:02:53.769 10:40:42 -- scripts/common.sh@352 -- # local d=2 00:02:53.769 10:40:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:53.769 10:40:42 -- scripts/common.sh@354 -- # echo 2 00:02:53.769 10:40:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:02:53.769 10:40:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:02:53.769 10:40:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:02:53.769 10:40:42 -- scripts/common.sh@367 -- # return 0 00:02:53.769 10:40:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:53.769 10:40:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:02:53.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.769 --rc genhtml_branch_coverage=1 00:02:53.769 --rc genhtml_function_coverage=1 00:02:53.769 --rc genhtml_legend=1 00:02:53.769 --rc geninfo_all_blocks=1 00:02:53.769 --rc geninfo_unexecuted_blocks=1 00:02:53.770 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:53.770 ' 00:02:53.770 10:40:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:02:53.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.770 --rc genhtml_branch_coverage=1 00:02:53.770 --rc genhtml_function_coverage=1 00:02:53.770 --rc genhtml_legend=1 00:02:53.770 --rc geninfo_all_blocks=1 00:02:53.770 --rc geninfo_unexecuted_blocks=1 00:02:53.770 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:53.770 ' 00:02:53.770 10:40:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:02:53.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.770 --rc genhtml_branch_coverage=1 00:02:53.770 --rc genhtml_function_coverage=1 00:02:53.770 --rc genhtml_legend=1 00:02:53.770 --rc geninfo_all_blocks=1 00:02:53.770 --rc geninfo_unexecuted_blocks=1 00:02:53.770 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:53.770 ' 00:02:53.770 10:40:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:02:53.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.770 --rc genhtml_branch_coverage=1 00:02:53.770 --rc genhtml_function_coverage=1 00:02:53.770 --rc genhtml_legend=1 00:02:53.770 --rc geninfo_all_blocks=1 00:02:53.770 --rc geninfo_unexecuted_blocks=1 00:02:53.770 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:53.770 ' 00:02:53.770 10:40:42 -- setup/test-setup.sh@10 -- # uname -s 00:02:53.770 10:40:42 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:53.770 10:40:42 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:53.770 10:40:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:53.770 10:40:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:53.770 10:40:42 -- common/autotest_common.sh@10 -- # set +x 00:02:53.770 ************************************ 00:02:53.770 START TEST acl 00:02:53.770 ************************************ 00:02:53.770 10:40:42 -- 
common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:53.770 * Looking for test storage... 00:02:53.770 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:53.770 10:40:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:02:53.770 10:40:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:02:53.770 10:40:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:02:54.029 10:40:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:02:54.029 10:40:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:02:54.029 10:40:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:02:54.029 10:40:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:02:54.029 10:40:42 -- scripts/common.sh@335 -- # IFS=.-: 00:02:54.029 10:40:42 -- scripts/common.sh@335 -- # read -ra ver1 00:02:54.029 10:40:42 -- scripts/common.sh@336 -- # IFS=.-: 00:02:54.029 10:40:42 -- scripts/common.sh@336 -- # read -ra ver2 00:02:54.029 10:40:42 -- scripts/common.sh@337 -- # local 'op=<' 00:02:54.029 10:40:42 -- scripts/common.sh@339 -- # ver1_l=2 00:02:54.029 10:40:42 -- scripts/common.sh@340 -- # ver2_l=1 00:02:54.029 10:40:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:02:54.029 10:40:42 -- scripts/common.sh@343 -- # case "$op" in 00:02:54.029 10:40:42 -- scripts/common.sh@344 -- # : 1 00:02:54.029 10:40:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:02:54.029 10:40:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:54.029 10:40:42 -- scripts/common.sh@364 -- # decimal 1 00:02:54.029 10:40:42 -- scripts/common.sh@352 -- # local d=1 00:02:54.029 10:40:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:54.029 10:40:42 -- scripts/common.sh@354 -- # echo 1 00:02:54.029 10:40:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:02:54.029 10:40:42 -- scripts/common.sh@365 -- # decimal 2 00:02:54.029 10:40:42 -- scripts/common.sh@352 -- # local d=2 00:02:54.029 10:40:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:54.029 10:40:42 -- scripts/common.sh@354 -- # echo 2 00:02:54.029 10:40:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:02:54.029 10:40:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:02:54.030 10:40:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:02:54.030 10:40:42 -- scripts/common.sh@367 -- # return 0 00:02:54.030 10:40:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:54.030 10:40:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:02:54.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.030 --rc genhtml_branch_coverage=1 00:02:54.030 --rc genhtml_function_coverage=1 00:02:54.030 --rc genhtml_legend=1 00:02:54.030 --rc geninfo_all_blocks=1 00:02:54.030 --rc geninfo_unexecuted_blocks=1 00:02:54.030 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:54.030 ' 00:02:54.030 10:40:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:02:54.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.030 --rc genhtml_branch_coverage=1 00:02:54.030 --rc genhtml_function_coverage=1 00:02:54.030 --rc genhtml_legend=1 00:02:54.030 --rc geninfo_all_blocks=1 00:02:54.030 --rc geninfo_unexecuted_blocks=1 00:02:54.030 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:54.030 ' 00:02:54.030 10:40:42 -- common/autotest_common.sh@1704 
-- # export 'LCOV=lcov 00:02:54.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.030 --rc genhtml_branch_coverage=1 00:02:54.030 --rc genhtml_function_coverage=1 00:02:54.030 --rc genhtml_legend=1 00:02:54.030 --rc geninfo_all_blocks=1 00:02:54.030 --rc geninfo_unexecuted_blocks=1 00:02:54.030 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:54.030 ' 00:02:54.030 10:40:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:02:54.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.030 --rc genhtml_branch_coverage=1 00:02:54.030 --rc genhtml_function_coverage=1 00:02:54.030 --rc genhtml_legend=1 00:02:54.030 --rc geninfo_all_blocks=1 00:02:54.030 --rc geninfo_unexecuted_blocks=1 00:02:54.030 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:54.030 ' 00:02:54.030 10:40:42 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:54.030 10:40:42 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:02:54.030 10:40:42 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:02:54.030 10:40:42 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:02:54.030 10:40:42 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:54.030 10:40:42 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:02:54.030 10:40:42 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:02:54.030 10:40:42 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:54.030 10:40:42 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:54.030 10:40:42 -- setup/acl.sh@12 -- # devs=() 00:02:54.030 10:40:42 -- setup/acl.sh@12 -- # declare -a devs 00:02:54.030 10:40:42 -- setup/acl.sh@13 -- # drivers=() 00:02:54.030 10:40:42 -- setup/acl.sh@13 -- # declare -A drivers 00:02:54.030 10:40:42 -- setup/acl.sh@51 -- # setup reset 00:02:54.030 10:40:42 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:54.030 10:40:42 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:58.226 10:40:46 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:58.226 10:40:46 -- setup/acl.sh@16 -- # local dev driver 00:02:58.226 10:40:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.226 10:40:46 -- setup/acl.sh@15 -- # setup output status 00:02:58.226 10:40:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:58.226 10:40:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:00.762 Hugepages 00:03:00.762 node hugesize free / total 00:03:00.762 10:40:49 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:00.762 10:40:49 -- setup/acl.sh@19 -- # continue 00:03:00.762 10:40:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:00.762 10:40:49 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:00.762 10:40:49 -- setup/acl.sh@19 -- # continue 00:03:00.762 10:40:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:00.762 10:40:49 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:00.762 10:40:49 -- setup/acl.sh@19 -- # continue 00:03:00.762 10:40:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:00.762 00:03:00.762 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:00.762 10:40:49 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:00.762 10:40:49 -- setup/acl.sh@19 -- # continue 00:03:00.762 10:40:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:00.762 
10:40:49 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # continue 00:03:00.762 10:40:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:00.762 10:40:49 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # continue 00:03:00.762 10:40:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:00.762 10:40:49 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # continue 00:03:00.762 10:40:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:00.762 10:40:49 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # continue 00:03:00.762 10:40:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:00.762 10:40:49 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # continue 00:03:00.762 10:40:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:00.762 10:40:49 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # continue 00:03:00.762 10:40:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:00.762 10:40:49 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # continue 00:03:00.762 10:40:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:00.762 10:40:49 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # continue 00:03:00.762 10:40:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:00.762 10:40:49 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # continue 00:03:00.762 10:40:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:00.762 10:40:49 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # continue 00:03:00.762 10:40:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:00.762 10:40:49 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # continue 00:03:00.762 10:40:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:00.762 10:40:49 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # continue 00:03:00.762 10:40:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:00.762 10:40:49 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # continue 00:03:00.762 10:40:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
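The scan traced above (and continuing below) is how acl.sh builds its device list: each line of the `setup output status` table is consumed with `read -r _ dev _ _ _ driver _`, which peels the BDF out of column 2 and the driver out of column 6; ioatdma-bound functions hit `continue`, and only nvme-bound functions that are not on the block list are collected. A minimal sketch of that collection pattern, assuming `setup.sh status` output with the same column layout and assuming you run from the SPDK repo root (this is not the verbatim acl.sh code):

  devs=(); declare -A drivers
  while read -r _ dev _ _ _ driver _; do
    [[ $dev == *:*:*.* ]] || continue            # skip header and hugepage rows
    [[ $driver == nvme ]] || continue            # keep only NVMe-bound functions
    [[ $PCI_BLOCKED == *"$dev"* ]] && continue   # honor the block list, as in the trace
    devs+=("$dev"); drivers["$dev"]=$driver
  done < <(./scripts/setup.sh status)
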
00:03:00.762 10:40:49 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # continue 00:03:00.762 10:40:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:00.762 10:40:49 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # continue 00:03:00.762 10:40:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:00.762 10:40:49 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # continue 00:03:00.762 10:40:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:00.762 10:40:49 -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:00.762 10:40:49 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:00.762 10:40:49 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:00.762 10:40:49 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:00.762 10:40:49 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:00.762 10:40:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:00.762 10:40:49 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:00.762 10:40:49 -- setup/acl.sh@54 -- # run_test denied denied 00:03:00.762 10:40:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:00.762 10:40:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:00.762 10:40:49 -- common/autotest_common.sh@10 -- # set +x 00:03:00.763 ************************************ 00:03:00.763 START TEST denied 00:03:00.763 ************************************ 00:03:00.763 10:40:49 -- common/autotest_common.sh@1114 -- # denied 00:03:00.763 10:40:49 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:00.763 10:40:49 -- setup/acl.sh@38 -- # setup output config 00:03:00.763 10:40:49 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:00.763 10:40:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:00.763 10:40:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:04.957 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:04.957 10:40:53 -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:04.957 10:40:53 -- setup/acl.sh@28 -- # local dev driver 00:03:04.957 10:40:53 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:04.957 10:40:53 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:04.957 10:40:53 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:04.957 10:40:53 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:04.957 10:40:53 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:04.957 10:40:53 -- setup/acl.sh@41 -- # setup reset 00:03:04.957 10:40:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:04.957 10:40:53 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.151 00:03:09.151 real 0m8.104s 00:03:09.151 user 0m2.663s 00:03:09.151 sys 0m4.821s 00:03:09.151 10:40:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:09.151 10:40:57 -- common/autotest_common.sh@10 -- # set +x 00:03:09.151 ************************************ 00:03:09.151 END TEST denied 00:03:09.151 ************************************ 00:03:09.151 10:40:57 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:09.151 10:40:57 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:09.151 10:40:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:09.151 10:40:57 -- common/autotest_common.sh@10 -- # set +x 00:03:09.151 ************************************ 00:03:09.151 START TEST allowed 00:03:09.151 ************************************ 00:03:09.151 10:40:57 -- common/autotest_common.sh@1114 -- # allowed 00:03:09.151 10:40:57 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:09.151 10:40:57 -- setup/acl.sh@45 -- # setup output config 00:03:09.151 10:40:57 -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:09.151 10:40:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.151 10:40:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:14.463 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:14.463 10:41:02 -- setup/acl.sh@47 -- # verify 00:03:14.463 10:41:02 -- setup/acl.sh@28 -- # local dev driver 00:03:14.463 10:41:02 -- setup/acl.sh@48 -- # setup reset 00:03:14.463 10:41:02 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:14.463 10:41:02 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:17.753 00:03:17.753 real 0m8.736s 00:03:17.753 user 0m2.520s 00:03:17.753 sys 0m4.777s 00:03:17.753 10:41:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:17.753 10:41:06 -- common/autotest_common.sh@10 -- # set +x 00:03:17.753 ************************************ 00:03:17.753 END TEST allowed 00:03:17.753 ************************************ 00:03:17.753 00:03:17.753 real 0m23.921s 00:03:17.753 user 0m7.812s 00:03:17.753 sys 0m14.342s 00:03:17.753 10:41:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:17.753 10:41:06 -- common/autotest_common.sh@10 -- # set +x 00:03:17.753 ************************************ 00:03:17.753 END TEST acl 00:03:17.753 ************************************ 00:03:17.753 10:41:06 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:17.753 10:41:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:17.753 10:41:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:17.753 10:41:06 -- common/autotest_common.sh@10 -- # set +x 00:03:17.753 ************************************ 00:03:17.753 START TEST hugepages 00:03:17.753 ************************************ 00:03:17.753 10:41:06 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:17.753 * Looking for test storage... 
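Each suite in this section goes through run_test, which is what produces the asterisk banners and the real/user/sys summaries above (0m8.104s for denied, 0m8.736s for allowed, 0m23.921s for acl overall) before the hugepages suite starts. A rough sketch of that wrapper, assuming it reduces to banners around `time` (the real helper in autotest_common.sh also manages xtrace and exit status):

  run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"        # run the suite; `time` emits the real/user/sys lines
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
  }

  run_test acl ./test/setup/acl.sh    # usage mirroring the invocations traced here
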
00:03:17.753 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:17.753 10:41:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:17.753 10:41:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:17.753 10:41:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:18.013 10:41:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:18.013 10:41:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:18.013 10:41:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:18.013 10:41:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:18.013 10:41:06 -- scripts/common.sh@335 -- # IFS=.-: 00:03:18.013 10:41:06 -- scripts/common.sh@335 -- # read -ra ver1 00:03:18.013 10:41:06 -- scripts/common.sh@336 -- # IFS=.-: 00:03:18.013 10:41:06 -- scripts/common.sh@336 -- # read -ra ver2 00:03:18.013 10:41:06 -- scripts/common.sh@337 -- # local 'op=<' 00:03:18.013 10:41:06 -- scripts/common.sh@339 -- # ver1_l=2 00:03:18.013 10:41:06 -- scripts/common.sh@340 -- # ver2_l=1 00:03:18.013 10:41:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:18.013 10:41:06 -- scripts/common.sh@343 -- # case "$op" in 00:03:18.013 10:41:06 -- scripts/common.sh@344 -- # : 1 00:03:18.013 10:41:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:18.013 10:41:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:18.013 10:41:06 -- scripts/common.sh@364 -- # decimal 1 00:03:18.013 10:41:06 -- scripts/common.sh@352 -- # local d=1 00:03:18.013 10:41:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:18.013 10:41:06 -- scripts/common.sh@354 -- # echo 1 00:03:18.013 10:41:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:18.013 10:41:06 -- scripts/common.sh@365 -- # decimal 2 00:03:18.013 10:41:06 -- scripts/common.sh@352 -- # local d=2 00:03:18.013 10:41:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:18.013 10:41:06 -- scripts/common.sh@354 -- # echo 2 00:03:18.013 10:41:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:18.013 10:41:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:18.013 10:41:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:18.013 10:41:06 -- scripts/common.sh@367 -- # return 0 00:03:18.013 10:41:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:18.013 10:41:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:18.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.013 --rc genhtml_branch_coverage=1 00:03:18.013 --rc genhtml_function_coverage=1 00:03:18.013 --rc genhtml_legend=1 00:03:18.013 --rc geninfo_all_blocks=1 00:03:18.013 --rc geninfo_unexecuted_blocks=1 00:03:18.013 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:18.013 ' 00:03:18.013 10:41:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:18.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.013 --rc genhtml_branch_coverage=1 00:03:18.013 --rc genhtml_function_coverage=1 00:03:18.013 --rc genhtml_legend=1 00:03:18.013 --rc geninfo_all_blocks=1 00:03:18.013 --rc geninfo_unexecuted_blocks=1 00:03:18.013 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:18.013 ' 00:03:18.013 10:41:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:18.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.013 --rc genhtml_branch_coverage=1 
00:03:18.013 --rc genhtml_function_coverage=1 00:03:18.014 --rc genhtml_legend=1 00:03:18.014 --rc geninfo_all_blocks=1 00:03:18.014 --rc geninfo_unexecuted_blocks=1 00:03:18.014 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:18.014 ' 00:03:18.014 10:41:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:18.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.014 --rc genhtml_branch_coverage=1 00:03:18.014 --rc genhtml_function_coverage=1 00:03:18.014 --rc genhtml_legend=1 00:03:18.014 --rc geninfo_all_blocks=1 00:03:18.014 --rc geninfo_unexecuted_blocks=1 00:03:18.014 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:18.014 ' 00:03:18.014 10:41:06 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:18.014 10:41:06 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:18.014 10:41:06 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:18.014 10:41:06 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:18.014 10:41:06 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:18.014 10:41:06 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:18.014 10:41:06 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:18.014 10:41:06 -- setup/common.sh@18 -- # local node= 00:03:18.014 10:41:06 -- setup/common.sh@19 -- # local var val 00:03:18.014 10:41:06 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.014 10:41:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.014 10:41:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.014 10:41:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.014 10:41:06 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.014 10:41:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.014 10:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.014 10:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.014 10:41:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 40919408 kB' 'MemAvailable: 44638372 kB' 'Buffers: 9316 kB' 'Cached: 11109028 kB' 'SwapCached: 0 kB' 'Active: 7884020 kB' 'Inactive: 3689320 kB' 'Active(anon): 7466340 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458392 kB' 'Mapped: 167620 kB' 'Shmem: 7011344 kB' 'KReclaimable: 217660 kB' 'Slab: 916140 kB' 'SReclaimable: 217660 kB' 'SUnreclaim: 698480 kB' 'KernelStack: 21824 kB' 'PageTables: 7840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36433344 kB' 'Committed_AS: 8621596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214208 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB' 00:03:18.014 10:41:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.014 10:41:06 -- setup/common.sh@32 -- # continue 00:03:18.014 10:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.014 10:41:06 -- setup/common.sh@31 -- # 
read -r var val _ [xtrace elided: the scan repeats identically for every remaining /proc/meminfo field from MemFree through HugePages_Surp; each one fails the Hugepagesize match and hits "continue"] 00:03:18.015 10:41:06 -- 
setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.015 10:41:06 -- setup/common.sh@33 -- # echo 2048 00:03:18.015 10:41:06 -- setup/common.sh@33 -- # return 0 00:03:18.015 10:41:06 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:18.015 10:41:06 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:18.015 10:41:06 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:18.015 10:41:06 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:18.015 10:41:06 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:18.015 10:41:06 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:18.015 10:41:06 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:18.015 10:41:06 -- setup/hugepages.sh@207 -- # get_nodes 00:03:18.015 10:41:06 -- setup/hugepages.sh@27 -- # local node 00:03:18.015 10:41:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.015 10:41:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:18.015 10:41:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.015 10:41:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:18.015 10:41:06 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.015 10:41:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.015 10:41:06 -- setup/hugepages.sh@208 -- # clear_hp 00:03:18.015 10:41:06 -- setup/hugepages.sh@37 -- # local node hp 00:03:18.015 10:41:06 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:18.015 10:41:06 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.015 10:41:06 -- setup/hugepages.sh@41 -- # echo 0 00:03:18.015 10:41:06 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.015 10:41:06 -- setup/hugepages.sh@41 -- # echo 0 00:03:18.015 10:41:06 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:18.015 10:41:06 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.015 10:41:06 -- setup/hugepages.sh@41 -- # echo 0 00:03:18.015 10:41:06 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.015 10:41:06 -- setup/hugepages.sh@41 -- # echo 0 00:03:18.015 10:41:06 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:18.015 10:41:06 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:18.015 10:41:06 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:18.015 10:41:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:18.015 10:41:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:18.015 10:41:06 -- common/autotest_common.sh@10 -- # set +x 00:03:18.015 ************************************ 00:03:18.015 START TEST default_setup 00:03:18.015 ************************************ 00:03:18.015 10:41:06 -- common/autotest_common.sh@1114 -- # default_setup 00:03:18.015 10:41:06 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:18.015 10:41:06 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:18.015 10:41:06 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:18.015 10:41:06 -- setup/hugepages.sh@51 -- # shift 00:03:18.015 10:41:06 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:18.015 10:41:06 -- setup/hugepages.sh@52 -- # local node_ids 00:03:18.015 10:41:06 -- setup/hugepages.sh@55 -- # (( size >= 
default_hugepages )) 00:03:18.015 10:41:06 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:18.015 10:41:06 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:18.015 10:41:06 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:18.015 10:41:06 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.015 10:41:06 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:18.015 10:41:06 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.015 10:41:06 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.015 10:41:06 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.015 10:41:06 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:18.015 10:41:06 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:18.015 10:41:06 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:18.015 10:41:06 -- setup/hugepages.sh@73 -- # return 0 00:03:18.016 10:41:06 -- setup/hugepages.sh@137 -- # setup output 00:03:18.016 10:41:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.016 10:41:06 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:21.302 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:21.302 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:21.302 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:21.302 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:21.303 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:21.303 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:21.303 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:21.303 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:21.303 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:21.303 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:21.303 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:21.303 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:21.303 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:21.303 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:21.303 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:21.303 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:22.681 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:22.944 10:41:11 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:22.944 10:41:11 -- setup/hugepages.sh@89 -- # local node 00:03:22.944 10:41:11 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:22.944 10:41:11 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:22.944 10:41:11 -- setup/hugepages.sh@92 -- # local surp 00:03:22.944 10:41:11 -- setup/hugepages.sh@93 -- # local resv 00:03:22.944 10:41:11 -- setup/hugepages.sh@94 -- # local anon 00:03:22.944 10:41:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:22.944 10:41:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:22.944 10:41:11 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:22.945 10:41:11 -- setup/common.sh@18 -- # local node= 00:03:22.945 10:41:11 -- setup/common.sh@19 -- # local var val 00:03:22.945 10:41:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.945 10:41:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.945 10:41:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.945 10:41:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.945 10:41:11 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.945 10:41:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 
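The get_test_nr_hugepages trace above shows the sizing math: default_setup requests a 2097152 kB pool on node 0, and against the 2048 kB Hugepagesize read earlier that works out to nr_hugepages=1024 for that node. Spelled out with the values taken from the trace (the echo below writes the same per-node sysfs knob the clear_hp loop zeroed above, whereas the test itself delegates the allocation to setup.sh; writing it requires root):

  size_kb=2097152          # requested pool size (2 GiB)
  hugepagesize_kb=2048     # Hugepagesize reported by /proc/meminfo
  (( size_kb >= hugepagesize_kb )) || exit 1
  nr_hugepages=$(( size_kb / hugepagesize_kb ))    # -> 1024 pages
  echo "$nr_hugepages" \
    > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
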
00:03:22.945 10:41:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 43128224 kB' 'MemAvailable: 46846828 kB' 'Buffers: 9316 kB' 'Cached: 11109156 kB' 'SwapCached: 0 kB' 'Active: 7882940 kB' 'Inactive: 3689320 kB' 'Active(anon): 7465260 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457216 kB' 'Mapped: 166820 kB' 'Shmem: 7011472 kB' 'KReclaimable: 216940 kB' 'Slab: 914120 kB' 'SReclaimable: 216940 kB' 'SUnreclaim: 697180 kB' 'KernelStack: 21824 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 8620612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214208 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB' 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 
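The snapshot just printed is the first confirmation that default_setup stuck: HugePages_Total and HugePages_Free are both 1024 against a 2048 kB Hugepagesize, i.e. exactly the 2 GiB pool computed above. A quick manual equivalent of the same check:

  grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo
  cat /proc/sys/vm/nr_hugepages    # global count; 1024 expected here
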
00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 
00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # 
continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.945 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.945 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.946 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.946 10:41:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.946 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.946 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.946 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.946 10:41:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.946 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.946 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.946 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.946 10:41:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.946 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.946 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.946 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.946 10:41:11 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.946 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.946 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.946 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.946 10:41:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.946 10:41:11 -- setup/common.sh@32 -- # continue 00:03:22.946 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.946 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.946 10:41:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.946 10:41:11 -- setup/common.sh@33 -- # echo 0 00:03:22.946 10:41:11 -- setup/common.sh@33 -- # return 0 00:03:22.946 10:41:11 -- setup/hugepages.sh@97 -- # anon=0 00:03:22.946 10:41:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:22.946 10:41:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.946 10:41:11 -- setup/common.sh@18 -- # local node= 00:03:22.946 10:41:11 -- setup/common.sh@19 -- # local var val 00:03:22.946 10:41:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.946 10:41:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.946 10:41:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.946 10:41:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.946 10:41:11 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.946 10:41:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.946 10:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.946 10:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.946 10:41:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 43130188 kB' 'MemAvailable: 46848792 kB' 'Buffers: 9316 kB' 'Cached: 11109156 kB' 'SwapCached: 0 kB' 'Active: 7883608 kB' 'Inactive: 3689320 kB' 'Active(anon): 7465928 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457840 kB' 'Mapped: 166820 kB' 'Shmem: 7011472 kB' 'KReclaimable: 216940 kB' 'Slab: 914088 kB' 'SReclaimable: 216940 
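The helper being traced here is get_meminfo from the harness's setup/common.sh. Below is an editor's bash sketch reconstructed from the xtrace entries above; it is an approximation, not the verbatim source (the input redirection and the final status handling are assumptions):

shopt -s extglob   # needed for the +([0-9]) pattern used below
get_meminfo() {    # usage: get_meminfo <key> [numa-node]
    local get=$1 node=$2
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # with a node argument, read the per-NUMA-node counters instead
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix of per-node files
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the long run of skipped keys in the trace
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

So get_meminfo AnonHugePages walks every key of the snapshot until AnonHugePages matches, prints its value (0 here), and returns.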
00:03:22.946 10:41:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:22.946 10:41:11 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:22.946 10:41:11 -- setup/common.sh@18 -- # local node=
00:03:22.946 10:41:11 -- setup/common.sh@19 -- # local var val
00:03:22.946 10:41:11 -- setup/common.sh@20 -- # local mem_f mem
00:03:22.946 10:41:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.946 10:41:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.946 10:41:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.946 10:41:11 -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.946 10:41:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.946 10:41:11 -- setup/common.sh@31 -- # IFS=': '
00:03:22.946 10:41:11 -- setup/common.sh@31 -- # read -r var val _
00:03:22.946 10:41:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 43130188 kB' 'MemAvailable: 46848792 kB' 'Buffers: 9316 kB' 'Cached: 11109156 kB' 'SwapCached: 0 kB' 'Active: 7883608 kB' 'Inactive: 3689320 kB' 'Active(anon): 7465928 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457840 kB' 'Mapped: 166820 kB' 'Shmem: 7011472 kB' 'KReclaimable: 216940 kB' 'Slab: 914088 kB' 'SReclaimable: 216940 kB' 'SUnreclaim: 697148 kB' 'KernelStack: 21760 kB' 'PageTables: 7524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 8620624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214240 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB'
[xtrace elided: the same per-key scan, this time against HugePages_Surp]
00:03:22.947 10:41:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:22.947 10:41:11 -- setup/common.sh@33 -- # echo 0
00:03:22.947 10:41:11 -- setup/common.sh@33 -- # return 0
00:03:22.947 10:41:11 -- setup/hugepages.sh@99 -- # surp=0
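Both queries so far (AnonHugePages, HugePages_Surp) return 0, and HugePages_Rsvd follows next. Outside the harness, the same values can be read in one shot; an equivalent direct query using only standard /proc/meminfo fields:

awk '/^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)):/ {print}' /proc/meminfo
# expected for this run: AnonHugePages 0 kB, HugePages_Total/Free 1024, Rsvd/Surp 0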
00:03:22.947 10:41:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:22.947 10:41:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:22.947 10:41:11 -- setup/common.sh@18 -- # local node=
00:03:22.947 10:41:11 -- setup/common.sh@19 -- # local var val
00:03:22.947 10:41:11 -- setup/common.sh@20 -- # local mem_f mem
00:03:22.947 10:41:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.947 10:41:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.947 10:41:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.947 10:41:11 -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.947 10:41:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.947 10:41:11 -- setup/common.sh@31 -- # IFS=': '
00:03:22.947 10:41:11 -- setup/common.sh@31 -- # read -r var val _
00:03:22.947 10:41:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 43129336 kB' 'MemAvailable: 46847940 kB' 'Buffers: 9316 kB' 'Cached: 11109164 kB' 'SwapCached: 0 kB' 'Active: 7883204 kB' 'Inactive: 3689320 kB' 'Active(anon): 7465524 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457352 kB' 'Mapped: 166828 kB' 'Shmem: 7011480 kB' 'KReclaimable: 216940 kB' 'Slab: 914024 kB' 'SReclaimable: 216940 kB' 'SUnreclaim: 697084 kB' 'KernelStack: 22032 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 8620636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214352 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB'
[xtrace elided: the same per-key scan against HugePages_Rsvd]
00:03:22.948 10:41:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:22.948 10:41:11 -- setup/common.sh@33 -- # echo 0
00:03:22.948 10:41:11 -- setup/common.sh@33 -- # return 0
00:03:22.948 10:41:11 -- setup/hugepages.sh@100 -- # resv=0
00:03:22.948 10:41:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
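nr_hugepages matches the request. The arithmetic hugepages.sh@107-@110 is about to apply is: the kernel-reported HugePages_Total must equal the requested count plus surplus and reserved pages. As a standalone sketch, using the get_meminfo sketch above (the exit handling is illustrative, not the harness's exact source):

nr_hugepages=1024 surp=0 resv=0                      # values gathered in this run
total=$(get_meminfo HugePages_Total)                 # 1024 here
(( total == nr_hugepages + surp + resv )) || exit 1  # verification fails if the kernel disagrees
(( total == nr_hugepages ))                          # additionally: no surplus/reserved in use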
00:03:22.948 10:41:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:22.948 10:41:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:22.948 10:41:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:22.948 10:41:11 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:22.948 10:41:11 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:22.948 10:41:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:22.949 10:41:11 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:22.949 10:41:11 -- setup/common.sh@18 -- # local node=
00:03:22.949 10:41:11 -- setup/common.sh@19 -- # local var val
00:03:22.949 10:41:11 -- setup/common.sh@20 -- # local mem_f mem
00:03:22.949 10:41:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.949 10:41:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.949 10:41:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.949 10:41:11 -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.949 10:41:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.949 10:41:11 -- setup/common.sh@31 -- # IFS=': '
00:03:22.949 10:41:11 -- setup/common.sh@31 -- # read -r var val _
00:03:22.949 10:41:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 43129496 kB' 'MemAvailable: 46848100 kB' 'Buffers: 9316 kB' 'Cached: 11109164 kB' 'SwapCached: 0 kB' 'Active: 7883084 kB' 'Inactive: 3689320 kB' 'Active(anon): 7465404 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457204 kB' 'Mapped: 166724 kB' 'Shmem: 7011480 kB' 'KReclaimable: 216940 kB' 'Slab: 913948 kB' 'SReclaimable: 216940 kB' 'SUnreclaim: 697008 kB' 'KernelStack: 21968 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 8620652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214352 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB'
[xtrace elided: the same per-key scan against HugePages_Total]
00:03:22.950 10:41:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:22.950 10:41:11 -- setup/common.sh@33 -- # echo 1024
00:03:22.950 10:41:11 -- setup/common.sh@33 -- # return 0
00:03:22.950 10:41:11 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:22.950 10:41:11 -- setup/hugepages.sh@112 -- # get_nodes
00:03:22.950 10:41:11 -- setup/hugepages.sh@27 -- # local node
00:03:22.950 10:41:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:22.950 10:41:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:22.950 10:41:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:22.950 10:41:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:22.950 10:41:11 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:22.950 10:41:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:22.950 10:41:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:22.950 10:41:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:22.950 10:41:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:22.950 10:41:11 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:22.950 10:41:11 -- setup/common.sh@18 -- # local node=0
00:03:22.950 10:41:11 -- setup/common.sh@19 -- # local var val
00:03:22.950 10:41:11 -- setup/common.sh@20 -- # local mem_f mem
00:03:22.950 10:41:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:22.950 10:41:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:22.950 10:41:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:22.950 10:41:11 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:22.950 10:41:11 -- setup/common.sh@18 -- # local node=0
00:03:22.950 10:41:11 -- setup/common.sh@19 -- # local var val
00:03:22.950 10:41:11 -- setup/common.sh@20 -- # local mem_f mem
00:03:22.950 10:41:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.950 10:41:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:22.950 10:41:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:22.950 10:41:11 -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.950 10:41:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.950 10:41:11 -- setup/common.sh@31 -- # IFS=': '
00:03:22.950 10:41:11 -- setup/common.sh@31 -- # read -r var val _
00:03:22.950 10:41:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 25974768 kB' 'MemUsed: 6610600 kB' 'SwapCached: 0 kB' 'Active: 2892140 kB' 'Inactive: 176724 kB' 'Active(anon): 2705972 kB' 'Inactive(anon): 0 kB' 'Active(file): 186168 kB' 'Inactive(file): 176724 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2681892 kB' 'Mapped: 71044 kB' 'AnonPages: 390120 kB' 'Shmem: 2319000 kB' 'KernelStack: 12792 kB' 'PageTables: 5672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 78632 kB' 'Slab: 391060 kB' 'SReclaimable: 78632 kB' 'SUnreclaim: 312428 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... scan skips MemTotal through HugePages_Free, one "[[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" pair per field ...]
00:03:22.951 10:41:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:22.951 10:41:11 -- setup/common.sh@33 -- # echo 0
00:03:22.951 10:41:11 -- setup/common.sh@33 -- # return 0
00:03:22.951 10:41:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:22.951 10:41:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:22.951 10:41:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:22.951 10:41:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:22.951 10:41:11 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:22.951 node0=1024 expecting 1024
00:03:22.951 10:41:11 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:22.951 
00:03:22.951 real	0m4.997s
00:03:22.951 user	0m1.260s
00:03:22.951 sys	0m2.312s
00:03:22.951 10:41:11 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:22.951 10:41:11 -- common/autotest_common.sh@10 -- # set +x
00:03:22.951 ************************************
00:03:22.951 END TEST default_setup
00:03:22.951 ************************************
00:03:22.951 10:41:11 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:22.951 10:41:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:22.951 10:41:11 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:22.951 10:41:11 -- common/autotest_common.sh@10 -- # set +x
00:03:22.951 ************************************
00:03:22.951 START TEST per_node_1G_alloc
00:03:22.951 ************************************
00:03:22.951 10:41:11 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:03:22.951 10:41:11 -- setup/hugepages.sh@143 -- # local IFS=,
00:03:22.951 10:41:11 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:22.951 10:41:11 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:22.951 10:41:11 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:22.951 10:41:11 -- setup/hugepages.sh@51 -- # shift
00:03:22.951 10:41:11 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:22.951 10:41:11 -- setup/hugepages.sh@52 -- # local node_ids
00:03:22.951 10:41:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:22.951 10:41:11 -- setup/hugepages.sh@57 -- # nr_hugepages=512
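get_test_nr_hugepages above converts the 1048576 kB (1 GiB) request into 512 pages using the machine's default hugepage size of 2048 kB ('Hugepagesize: 2048 kB' in the dumps), and get_test_nr_hugepages_per_node then records 512 for each of nodes 0 and 1. A sketch of that arithmetic, assuming the division the traced values imply:

    size_kb=1048576                          # 1 GiB requested by per_node_1G_alloc
    default_hugepage_kb=2048                 # 'Hugepagesize: 2048 kB' from /proc/meminfo
    nr_hugepages=$(( size_kb / default_hugepage_kb ))
    echo "nr_hugepages=$nr_hugepages"        # 512, matching nr_hugepages=512 above
    # With HUGENODE=0,1 both nodes get the full 512, so the system total becomes 1024.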
00:03:22.951 10:41:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:22.951 10:41:11 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:22.951 10:41:11 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:22.951 10:41:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:22.951 10:41:11 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:22.951 10:41:11 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:22.951 10:41:11 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:22.951 10:41:11 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:22.951 10:41:11 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:22.951 10:41:11 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:22.951 10:41:11 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:22.951 10:41:11 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:22.951 10:41:11 -- setup/hugepages.sh@73 -- # return 0
00:03:22.951 10:41:11 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:22.951 10:41:11 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:22.951 10:41:11 -- setup/hugepages.sh@146 -- # setup output
00:03:22.951 10:41:11 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:22.951 10:41:11 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:26.242 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:26.242 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:26.242 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:26.242 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:26.242 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:26.242 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:26.242 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:26.242 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:26.242 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:26.242 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:26.242 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:26.242 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:26.242 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:26.242 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:26.242 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:26.242 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:26.242 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
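The allocation itself is delegated to SPDK's scripts/setup.sh, driven by the NRHUGE and HUGENODE variables traced above; the device lines are its output confirming each PCI function already uses vfio-pci. Roughly the equivalent manual step, with paths and values taken from the trace (the sysfs route is the generic kernel interface to the same per-node pools, not necessarily what setup.sh does internally):

    # As traced: request 512 hugepages on each of NUMA nodes 0 and 1.
    sudo NRHUGE=512 HUGENODE=0,1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
    # Generic kernel route to the same end state (per-node 2 MiB pools):
    echo 512 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    echo 512 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages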
00:03:26.505 10:41:15 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:26.505 10:41:15 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:26.505 10:41:15 -- setup/hugepages.sh@89 -- # local node
00:03:26.505 10:41:15 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:26.505 10:41:15 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:26.505 10:41:15 -- setup/hugepages.sh@92 -- # local surp
00:03:26.505 10:41:15 -- setup/hugepages.sh@93 -- # local resv
00:03:26.505 10:41:15 -- setup/hugepages.sh@94 -- # local anon
00:03:26.505 10:41:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:26.505 10:41:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:26.505 10:41:15 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:26.505 10:41:15 -- setup/common.sh@18 -- # local node=
00:03:26.505 10:41:15 -- setup/common.sh@19 -- # local var val
00:03:26.505 10:41:15 -- setup/common.sh@20 -- # local mem_f mem
00:03:26.505 10:41:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.505 10:41:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.505 10:41:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.505 10:41:15 -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.505 10:41:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.505 10:41:15 -- setup/common.sh@31 -- # IFS=': '
00:03:26.505 10:41:15 -- setup/common.sh@31 -- # read -r var val _
00:03:26.505 10:41:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 43117204 kB' 'MemAvailable: 46835804 kB' 'Buffers: 9316 kB' 'Cached: 11109272 kB' 'SwapCached: 0 kB' 'Active: 7881880 kB' 'Inactive: 3689320 kB' 'Active(anon): 7464200 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455944 kB' 'Mapped: 165868 kB' 'Shmem: 7011588 kB' 'KReclaimable: 216932 kB' 'Slab: 913576 kB' 'SReclaimable: 216932 kB' 'SUnreclaim: 696644 kB' 'KernelStack: 21744 kB' 'PageTables: 7416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 8609508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214352 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB'
[... scan skips MemTotal through HardwareCorrupted, one "[[ key == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" pair per field ...]
00:03:26.506 10:41:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:26.506 10:41:15 -- setup/common.sh@33 -- # echo 0
00:03:26.506 10:41:15 -- setup/common.sh@33 -- # return 0
00:03:26.506 10:41:15 -- setup/hugepages.sh@97 -- # anon=0
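verify_nr_hugepages first checks /sys/kernel/mm/transparent_hugepage/enabled (the "always [madvise] never" test above) so that AnonHugePages is only counted when THP is not globally disabled; here it reads 0. A sketch of that guard, with an awk read standing in for the script's get_meminfo call:

    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" in this run
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # Stand-in for: anon=$(get_meminfo AnonHugePages)
        anon=$(awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo)
    fi
    echo "anon=$anon"                                      # 0 in this run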
00:03:26.506 10:41:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:26.506 10:41:15 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.506 10:41:15 -- setup/common.sh@18 -- # local node=
00:03:26.506 10:41:15 -- setup/common.sh@19 -- # local var val
00:03:26.506 10:41:15 -- setup/common.sh@20 -- # local mem_f mem
00:03:26.506 10:41:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.506 10:41:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.506 10:41:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.506 10:41:15 -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.506 10:41:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.506 10:41:15 -- setup/common.sh@31 -- # IFS=': '
00:03:26.506 10:41:15 -- setup/common.sh@31 -- # read -r var val _
00:03:26.507 10:41:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 43117792 kB' 'MemAvailable: 46836392 kB' 'Buffers: 9316 kB' 'Cached: 11109276 kB' 'SwapCached: 0 kB' 'Active: 7881948 kB' 'Inactive: 3689320 kB' 'Active(anon): 7464268 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 456036 kB' 'Mapped: 165772 kB' 'Shmem: 7011592 kB' 'KReclaimable: 216932 kB' 'Slab: 913512 kB' 'SReclaimable: 216932 kB' 'SUnreclaim: 696580 kB' 'KernelStack: 21728 kB' 'PageTables: 7340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 8609520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214320 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB'
[... scan skips MemTotal through HugePages_Rsvd, one "[[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" pair per field ...]
00:03:26.508 10:41:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.508 10:41:15 -- setup/common.sh@33 -- # echo 0
00:03:26.508 10:41:15 -- setup/common.sh@33 -- # return 0
00:03:26.508 10:41:15 -- setup/hugepages.sh@99 -- # surp=0
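With surp read (and resv read immediately below), verify_nr_hugepages can finish the bookkeeping the earlier default_setup trace showed at hugepages.sh@110 and @115-@130: the global HugePages_Total must equal nr_hugepages + surp + resv, and each node's count is compared against the expected split. A condensed sketch of that accounting; the awk read stands in for the script's get_meminfo call:

    nr_hugepages=1024                 # 512 requested on each of nodes 0 and 1
    surp=0; resv=0                    # from the two get_meminfo calls around this point
    total=$(awk '$1 == "HugePages_Total:" { print $2 }' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) || echo "unexpected HugePages_Total: $total"
    # Per node, nodes_test[node] += resv and the node's own count is compared with
    # the requested split; default_setup printed "node0=1024 expecting 1024" above.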
-- # continue 00:03:26.508 10:41:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.508 10:41:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.508 10:41:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.508 10:41:15 -- setup/common.sh@32 -- # continue 00:03:26.508 10:41:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.508 10:41:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.508 10:41:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.508 10:41:15 -- setup/common.sh@32 -- # continue 00:03:26.508 10:41:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.508 10:41:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.508 10:41:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.508 10:41:15 -- setup/common.sh@32 -- # continue 00:03:26.508 10:41:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.508 10:41:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.508 10:41:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.508 10:41:15 -- setup/common.sh@32 -- # continue 00:03:26.508 10:41:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.508 10:41:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.508 10:41:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.508 10:41:15 -- setup/common.sh@32 -- # continue 00:03:26.508 10:41:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.508 10:41:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.508 10:41:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.508 10:41:15 -- setup/common.sh@32 -- # continue 00:03:26.508 10:41:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.508 10:41:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.508 10:41:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.508 10:41:15 -- setup/common.sh@32 -- # continue 00:03:26.508 10:41:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.508 10:41:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.508 10:41:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.508 10:41:15 -- setup/common.sh@32 -- # continue 00:03:26.508 10:41:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.508 10:41:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.508 10:41:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.508 10:41:15 -- setup/common.sh@32 -- # continue 00:03:26.508 10:41:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.508 10:41:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.508 10:41:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.508 10:41:15 -- setup/common.sh@32 -- # continue 00:03:26.508 10:41:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.508 10:41:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.508 10:41:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.508 10:41:15 -- setup/common.sh@32 -- # continue 00:03:26.508 10:41:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.508 10:41:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.508 10:41:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.508 10:41:15 -- setup/common.sh@32 -- # continue 00:03:26.508 10:41:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.508 10:41:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.508 10:41:15 -- setup/common.sh@32 
-- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:26.508 10:41:15 -- setup/common.sh@32 -- # continue
00:03:26.508 10:41:15 -- setup/common.sh@31 -- # IFS=': '
00:03:26.508 10:41:15 -- setup/common.sh@31 -- # read -r var val _
[... identical @32 test / @32 continue / @31 IFS / @31 read iterations elided for SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total and HugePages_Free; none match HugePages_Rsvd ...]
00:03:26.509 10:41:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:26.509 10:41:15 -- setup/common.sh@33 -- # echo 0
00:03:26.509 10:41:15 -- setup/common.sh@33 -- # return 0
00:03:26.509 10:41:15 -- setup/hugepages.sh@100 -- # resv=0
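Taken together, the trace above is a single pass of a plain key/value scan over /proc/meminfo. A minimal sketch of that technique, assuming the usual "Key:   value kB" meminfo layout (the function name is illustrative, not the script's own):

    # Sketch only: the same key scan the @31/@32 trace lines perform.
    # IFS=': ' splits each line into the key ($var) and the number ($val);
    # every non-matching key hits "continue", the match echoes its value.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"            # e.g. 0 for HugePages_Rsvd on this box
            return 0
        done < /proc/meminfo
        return 1                   # key not present
    }
    get_meminfo_sketch HugePages_Rsvd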
00:03:26.509 10:41:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:26.509 nr_hugepages=1024
00:03:26.509 10:41:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:26.509 resv_hugepages=0
00:03:26.509 10:41:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:26.509 surplus_hugepages=0
00:03:26.509 10:41:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:26.509 anon_hugepages=0
00:03:26.509 10:41:15 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:26.509 10:41:15 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:26.509 10:41:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:26.509 10:41:15 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:26.509 10:41:15 -- setup/common.sh@18 -- # local node=
00:03:26.509 10:41:15 -- setup/common.sh@19 -- # local var val
00:03:26.509 10:41:15 -- setup/common.sh@20 -- # local mem_f mem
00:03:26.509 10:41:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.509 10:41:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.509 10:41:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.509 10:41:15 -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.509 10:41:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.509 10:41:15 -- setup/common.sh@31 -- # IFS=': '
00:03:26.509 10:41:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 43118340 kB' 'MemAvailable: 46836940 kB' 'Buffers: 9316 kB' 'Cached: 11109300 kB' 'SwapCached: 0 kB' 'Active: 7881992 kB' 'Inactive: 3689320 kB' 'Active(anon): 7464312 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 456036 kB' 'Mapped: 165772 kB' 'Shmem: 7011616 kB' 'KReclaimable: 216932 kB' 'Slab: 913512 kB' 'SReclaimable: 216932 kB' 'SUnreclaim: 696580 kB' 'KernelStack: 21728 kB' 'PageTables: 7340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 8609548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214336 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB'
00:03:26.509 10:41:15 -- setup/common.sh@31 -- # read -r var val _
[... identical scan iterations elided: every key from MemTotal through HugePages_Free is tested against HugePages_Total and skipped with @32 continue ...]
00:03:26.511 10:41:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:26.511 10:41:15 -- setup/common.sh@33 -- # echo 1024
00:03:26.511 10:41:15 -- setup/common.sh@33 -- # return 0
00:03:26.511 10:41:15 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:26.511 10:41:15 -- setup/hugepages.sh@112 -- # get_nodes
00:03:26.511 10:41:15 -- setup/hugepages.sh@27 -- # local node
00:03:26.511 10:41:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.511 10:41:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:26.511 10:41:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.511 10:41:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:26.511 10:41:15 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:26.511 10:41:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
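The same scan is about to run once per NUMA node, and the only difference the trace will show is the file being read. A hedged sketch of that source selection (helper name is illustrative; the paths and the "Node N " prefix stripping mirror the @22-@24 and @29 trace lines):

    # Sketch: with no node argument, read the machine-wide /proc/meminfo;
    # with node=N, read the per-node file and strip its "Node N " line
    # prefix so the same "Key: value" parser works on both.
    meminfo_source() {
        local node=$1 mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        sed 's/^Node [0-9]* //' "$mem_f"    # normalize to "Key: value" lines
    }
    meminfo_source 0 | grep HugePages_Surp   # e.g. "HugePages_Surp:     0"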
00:03:26.511 10:41:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:26.511 10:41:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:26.511 10:41:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:26.511 10:41:15 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.511 10:41:15 -- setup/common.sh@18 -- # local node=0
00:03:26.511 10:41:15 -- setup/common.sh@19 -- # local var val
00:03:26.511 10:41:15 -- setup/common.sh@20 -- # local mem_f mem
00:03:26.511 10:41:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.511 10:41:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:26.511 10:41:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:26.511 10:41:15 -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.511 10:41:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.511 10:41:15 -- setup/common.sh@31 -- # IFS=': '
00:03:26.511 10:41:15 -- setup/common.sh@31 -- # read -r var val _
00:03:26.511 10:41:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 27020804 kB' 'MemUsed: 5564564 kB' 'SwapCached: 0 kB' 'Active: 2891676 kB' 'Inactive: 176724 kB' 'Active(anon): 2705508 kB' 'Inactive(anon): 0 kB' 'Active(file): 186168 kB' 'Inactive(file): 176724 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2681936 kB' 'Mapped: 70612 kB' 'AnonPages: 389728 kB' 'Shmem: 2319044 kB' 'KernelStack: 12536 kB' 'PageTables: 5084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 78632 kB' 'Slab: 391172 kB' 'SReclaimable: 78632 kB' 'SUnreclaim: 312540 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... identical scan iterations elided: every node0 key from MemTotal through HugePages_Free is tested against HugePages_Surp and skipped ...]
00:03:26.512 10:41:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.512 10:41:15 -- setup/common.sh@33 -- # echo 0
00:03:26.512 10:41:15 -- setup/common.sh@33 -- # return 0
00:03:26.512 10:41:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:26.512 10:41:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:26.512 10:41:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:26.512 10:41:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:26.512 10:41:15 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.512 10:41:15 -- setup/common.sh@18 -- # local node=1
00:03:26.512 10:41:15 -- setup/common.sh@19 -- # local var val
00:03:26.512 10:41:15 -- setup/common.sh@20 -- # local mem_f mem
00:03:26.512 10:41:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.512 10:41:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:26.512 10:41:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:26.512 10:41:15 -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.512 10:41:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.512 10:41:15 -- setup/common.sh@31 -- # IFS=': '
00:03:26.512 10:41:15 -- setup/common.sh@31 -- # read -r var val _
00:03:26.512 10:41:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698416 kB' 'MemFree: 16096624 kB' 'MemUsed: 11601792 kB' 'SwapCached: 0 kB' 'Active: 4990340 kB' 'Inactive: 3512596 kB' 'Active(anon): 4758828 kB' 'Inactive(anon): 0 kB' 'Active(file): 231512 kB' 'Inactive(file): 3512596 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8436708 kB' 'Mapped: 95160 kB' 'AnonPages: 66312 kB' 'Shmem: 4692600 kB' 'KernelStack: 9192 kB' 'PageTables: 2256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138300 kB' 'Slab: 522340 kB' 'SReclaimable: 138300 kB' 'SUnreclaim: 384040 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... identical scan iterations elided: every node1 key is tested against HugePages_Surp and skipped until the target ...]
00:03:26.513 10:41:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.513 10:41:15 -- setup/common.sh@33 -- # echo 0
00:03:26.513 10:41:15 -- setup/common.sh@33 -- # return 0
00:03:26.513 10:41:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:26.513 10:41:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:26.513 10:41:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:26.513 10:41:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:26.513 10:41:15 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:26.513 node0=512 expecting 512
00:03:26.513 10:41:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:26.513 10:41:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:26.513 10:41:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
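The sorted_t/sorted_s writes above are a compact bash idiom: each node's page count is used as an array index, so equal counts collapse onto one slot and a single populated index means every node received the same share. A sketch of the idea (array names borrowed from the trace; the final check here is illustrative, not the script's own @130 comparison):

    # Sketch: value-as-index dedup of per-node hugepage counts.
    declare -a sorted_t nodes_test
    nodes_test=(512 512)                  # per-node counts from this run
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1      # index 512 is set once per node
    done
    (( ${#sorted_t[@]} == 1 )) && echo "even: ${!sorted_t[*]} pages on every node"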
00:03:26.513 10:41:15 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:26.513 node1=512 expecting 512
00:03:26.513 10:41:15 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:26.513 real 0m3.563s
00:03:26.513 user 0m1.392s
00:03:26.513 sys 0m2.225s
00:03:26.513 10:41:15 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:26.513 10:41:15 -- common/autotest_common.sh@10 -- # set +x
00:03:26.513 ************************************
00:03:26.513 END TEST per_node_1G_alloc
00:03:26.513 ************************************
00:03:26.772 10:41:15 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:26.772 10:41:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:26.772 10:41:15 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:26.772 10:41:15 -- common/autotest_common.sh@10 -- # set +x
00:03:26.772 ************************************
00:03:26.772 START TEST even_2G_alloc
00:03:26.772 ************************************
00:03:26.772 10:41:15 -- common/autotest_common.sh@1114 -- # even_2G_alloc
00:03:26.772 10:41:15 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:26.772 10:41:15 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:26.772 10:41:15 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:26.772 10:41:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:26.772 10:41:15 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:26.772 10:41:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:26.772 10:41:15 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:26.772 10:41:15 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:26.772 10:41:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:26.772 10:41:15 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:26.772 10:41:15 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:26.772 10:41:15 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:26.772 10:41:15 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:26.772 10:41:15 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:26.773 10:41:15 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:26.773 10:41:15 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:26.773 10:41:15 -- setup/hugepages.sh@83 -- # : 512
00:03:26.773 10:41:15 -- setup/hugepages.sh@84 -- # : 1
00:03:26.773 10:41:15 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:26.773 10:41:15 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:26.773 10:41:15 -- setup/hugepages.sh@83 -- # : 0
00:03:26.773 10:41:15 -- setup/hugepages.sh@84 -- # : 0
00:03:26.773 10:41:15 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:26.773 10:41:15 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:26.773 10:41:15 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:26.773 10:41:15 -- setup/hugepages.sh@153 -- # setup output
00:03:26.773 10:41:15 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:26.773 10:41:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
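The numbers in this prologue fit the obvious arithmetic: a 2097152 kB request over the 2048 kB Hugepagesize reported in the meminfo dumps earlier is 1024 pages, halved across the two nodes. A sketch of that derivation (variable names are illustrative):

    # Sketch of the arithmetic behind the traced values.
    size_kb=2097152          # get_test_nr_hugepages argument
    hugepagesize_kb=2048     # from the 'Hugepagesize: 2048 kB' meminfo line
    no_nodes=2               # nodes found under /sys/devices/system/node
    nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 1024
    per_node=$(( nr_hugepages / no_nodes ))         # 512 each for node0/node1
    echo "NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes -> $per_node per node"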
00:03:30.062 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:30.062 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:30.062 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:30.062 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:30.062 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:30.062 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:30.062 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:30.062 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:30.062 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:30.062 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:30.062 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:30.062 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:30.062 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:30.062 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:30.062 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:30.062 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:30.062 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:30.062 10:41:19 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:30.062 10:41:19 -- setup/hugepages.sh@89 -- # local node
00:03:30.062 10:41:19 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:30.062 10:41:19 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:30.062 10:41:19 -- setup/hugepages.sh@92 -- # local surp
00:03:30.062 10:41:19 -- setup/hugepages.sh@93 -- # local resv
00:03:30.062 10:41:19 -- setup/hugepages.sh@94 -- # local anon
00:03:30.062 10:41:19 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
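The @96 test is the stock way to read transparent-hugepage state: the kernel prints every THP mode with the active one in brackets (here "always [madvise] never"), so matching against *[never]* asks whether THP is switched off before anonymous hugepage accounting is checked. A standalone sketch of the same probe (the sysfs path is the kernel's; the echo texts are illustrative):

    # Sketch: the bracketed token in this file is the mode in effect.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
    if [[ $thp != *"[never]"* ]]; then
        echo "THP active: $thp (AnonHugePages may be nonzero)"
    else
        echo "THP disabled (anon hugepage accounting should stay 0)"
    fi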
00:03:30.062 10:41:19 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:30.062 10:41:19 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:30.062 10:41:19 -- setup/common.sh@18 -- # local node=
00:03:30.325 10:41:19 -- setup/common.sh@19 -- # local var val
00:03:30.325 10:41:19 -- setup/common.sh@20 -- # local mem_f mem
00:03:30.325 10:41:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.325 10:41:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.325 10:41:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.325 10:41:19 -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.325 10:41:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.325 10:41:19 -- setup/common.sh@31 -- # IFS=': '
00:03:30.325 10:41:19 -- setup/common.sh@31 -- # read -r var val _
00:03:30.325 10:41:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 43127732 kB' 'MemAvailable: 46846328 kB' 'Buffers: 9316 kB' 'Cached: 11109404 kB' 'SwapCached: 0 kB' 'Active: 7883840 kB' 'Inactive: 3689320 kB' 'Active(anon): 7466160 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457188 kB' 'Mapped: 165992 kB' 'Shmem: 7011720 kB' 'KReclaimable: 216924 kB' 'Slab: 913736 kB' 'SReclaimable: 216924 kB' 'SUnreclaim: 696812 kB' 'KernelStack: 21776 kB' 'PageTables: 7500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 8609800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214320 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB'
[... identical scan iterations elided: MemTotal through VmallocChunk are tested against AnonHugePages and skipped ...]
00:03:30.326 10:41:19 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:30.326 10:41:19 -- setup/common.sh@32 -- #
continue 00:03:30.326 10:41:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.326 10:41:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.326 10:41:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.326 10:41:19 -- setup/common.sh@32 -- # continue 00:03:30.326 10:41:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.326 10:41:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.326 10:41:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.326 10:41:19 -- setup/common.sh@33 -- # echo 0 00:03:30.326 10:41:19 -- setup/common.sh@33 -- # return 0 00:03:30.326 10:41:19 -- setup/hugepages.sh@97 -- # anon=0 00:03:30.326 10:41:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:30.326 10:41:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.326 10:41:19 -- setup/common.sh@18 -- # local node= 00:03:30.326 10:41:19 -- setup/common.sh@19 -- # local var val 00:03:30.326 10:41:19 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.326 10:41:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.326 10:41:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.326 10:41:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.326 10:41:19 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.326 10:41:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.326 10:41:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.326 10:41:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.326 10:41:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 43130932 kB' 'MemAvailable: 46849528 kB' 'Buffers: 9316 kB' 'Cached: 11109416 kB' 'SwapCached: 0 kB' 'Active: 7882884 kB' 'Inactive: 3689320 kB' 'Active(anon): 7465204 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 456752 kB' 'Mapped: 165888 kB' 'Shmem: 7011732 kB' 'KReclaimable: 216924 kB' 'Slab: 913708 kB' 'SReclaimable: 216924 kB' 'SUnreclaim: 696784 kB' 'KernelStack: 21744 kB' 'PageTables: 7388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 8609952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214288 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB' 00:03:30.326 10:41:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.326 10:41:19 -- setup/common.sh@32 -- # continue 00:03:30.326 10:41:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.326 10:41:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.326 10:41:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.326 10:41:19 -- setup/common.sh@32 -- # continue 00:03:30.326 10:41:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.326 10:41:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.326 10:41:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.326 10:41:19 -- 
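To make the wall of xtrace above legible: get_meminfo (from the harness's setup/common.sh) dumps /proc/meminfo into an array, strips any "Node <N>" prefix, then walks it with IFS=': ' and read -r var val _ until the requested key matches, echoing its value. A minimal re-creation, reconstructed from the trace alone; details of the real SPDK helper (for example its failure path) may differ.

#!/usr/bin/env bash
# Sketch of the get_meminfo helper driving the trace above; reconstructed
# from the xtrace, not copied from the SPDK source.
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local var val _ mem
    local mem_f=/proc/meminfo
    # Per-node queries (get_meminfo <key> <node>) read the node-local file.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <N> "; strip it (extglob pattern).
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo AnonHugePages      # prints 0 given the dump above
get_meminfo HugePages_Surp 0   # per-NUMA-node query

The \A\n\o\n\H\u\g\e\P\a\g\e\s escapes in the trace are simply how bash xtrace prints the unquoted right-hand side of [[ $var == $get ]]: every character is backslash-escaped so the pattern is shown literally.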
[... xtrace trimmed: the same setup/common.sh@31-32 read/compare loop walks every /proc/meminfo key against \H\u\g\e\P\a\g\e\s\_\S\u\r\p, continuing past each non-match ...]
00:03:30.327 10:41:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.327 10:41:19 -- setup/common.sh@33 -- # echo 0
00:03:30.327 10:41:19 -- setup/common.sh@33 -- # return 0
00:03:30.327 10:41:19 -- setup/hugepages.sh@99 -- # surp=0
00:03:30.327 10:41:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:30.327 10:41:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:30.327 10:41:19 -- setup/common.sh@18 -- # local node=
00:03:30.327 10:41:19 -- setup/common.sh@19 -- # local var val
00:03:30.327 10:41:19 -- setup/common.sh@20 -- # local mem_f mem
00:03:30.327 10:41:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.327 10:41:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.327 10:41:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.327 10:41:19 -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.327 10:41:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.327 10:41:19 -- setup/common.sh@31 -- # IFS=': '
00:03:30.327 10:41:19 -- setup/common.sh@31 -- # read -r var val _
00:03:30.328 10:41:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 43131440 kB' 'MemAvailable: 46850036 kB' 'Buffers: 9316 kB' 'Cached: 11109432 kB' 'SwapCached: 0 kB' 'Active: 7882856 kB' 'Inactive: 3689320 kB' 'Active(anon): 7465176 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 456740 kB' 'Mapped: 165776 kB' 'Shmem: 7011748 kB' 'KReclaimable: 216924 kB' 'Slab: 913708 kB' 'SReclaimable: 216924 kB' 'SUnreclaim: 696784 kB' 'KernelStack: 21728 kB' 'PageTables: 7372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 8610340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214288 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB'
[... xtrace trimmed: per-key scan against \H\u\g\e\P\a\g\e\s\_\R\s\v\d, MemTotal through HugePages_Free, all non-matches ...]
00:03:30.329 10:41:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:30.329 10:41:19 -- setup/common.sh@33 -- # echo 0
00:03:30.329 10:41:19 -- setup/common.sh@33 -- # return 0
00:03:30.329 10:41:19 -- setup/hugepages.sh@100 -- # resv=0
00:03:30.329 10:41:19 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:30.329 nr_hugepages=1024
00:03:30.329 10:41:19 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:30.329 resv_hugepages=0
00:03:30.329 10:41:19 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:30.329 surplus_hugepages=0
00:03:30.329 10:41:19 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:30.329 anon_hugepages=0
00:03:30.329 10:41:19 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:30.329 10:41:19 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:30.329 10:41:19 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:30.329 10:41:19 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:30.329 10:41:19 -- setup/common.sh@18 -- # local node=
00:03:30.329 10:41:19 -- setup/common.sh@19 -- # local var val
00:03:30.329 10:41:19 -- setup/common.sh@20 -- # local mem_f mem
00:03:30.329 10:41:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.329 10:41:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.329 10:41:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.329 10:41:19 -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.329 10:41:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.329 10:41:19 -- setup/common.sh@31 -- # IFS=': '
00:03:30.329 10:41:19 -- setup/common.sh@31 -- # read -r var val _
00:03:30.329 10:41:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 43132944 kB' 'MemAvailable: 46851540 kB' 'Buffers: 9316 kB' 'Cached: 11109444 kB' 'SwapCached: 0 kB' 'Active: 7882856 kB' 'Inactive: 3689320 kB' 'Active(anon): 7465176 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 456756 kB' 'Mapped: 165776 kB' 'Shmem: 7011760 kB' 'KReclaimable: 216924 kB' 'Slab: 913708 kB' 'SReclaimable: 216924 kB' 'SUnreclaim: 696784 kB' 'KernelStack: 21728 kB' 'PageTables: 7344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 8610356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214288 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB'
[... xtrace trimmed: per-key scan against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l, MemTotal through Unaccepted, all non-matches ...]
00:03:30.330 10:41:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:30.330 10:41:19 -- setup/common.sh@33 -- # echo 1024
00:03:30.330 10:41:19 -- setup/common.sh@33 -- # return 0
00:03:30.330 10:41:19 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:30.330 10:41:19 -- setup/hugepages.sh@112 -- # get_nodes
00:03:30.330 10:41:19 -- setup/hugepages.sh@27 -- # local node
00:03:30.330 10:41:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:30.330 10:41:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:30.330 10:41:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:30.330 10:41:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:30.330 10:41:19 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:30.330 10:41:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:30.330 10:41:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:30.330 10:41:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:30.330 10:41:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:30.330 10:41:19 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:30.330 10:41:19 -- setup/common.sh@18 -- # local node=0
00:03:30.330 10:41:19 -- setup/common.sh@19 -- # local var val
00:03:30.330 10:41:19 -- setup/common.sh@20 -- # local mem_f mem
00:03:30.330 10:41:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.330 10:41:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:30.330 10:41:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:30.330 10:41:19 -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.330 10:41:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.330 10:41:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 27030840 kB' 'MemUsed: 5554528 kB' 'SwapCached: 0 kB' 'Active: 2892092 kB' 'Inactive: 176724 kB' 'Active(anon): 2705924 kB' 'Inactive(anon): 0 kB' 'Active(file): 186168 kB' 'Inactive(file): 176724 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2681968 kB' 'Mapped: 70616 kB' 'AnonPages: 390024 kB' 'Shmem: 2319076 kB' 'KernelStack: 12520 kB' 'PageTables: 5084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 78632 kB' 'Slab: 391160 kB' 'SReclaimable: 78632 kB' 'SUnreclaim: 312528 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:30.330 10:41:19 -- setup/common.sh@31 -- # IFS=': '
00:03:30.330 10:41:19 -- setup/common.sh@31 -- # read -r var val _
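Before scanning node0's meminfo, the trace runs get_nodes (hugepages.sh@27-33): it records each node's preallocated hugepage count in nodes_sys (512 on each of the two nodes here), then the @115-117 loop folds reserved and per-node surplus pages into the expected counts. A sketch of that logic, reusing the get_meminfo sketch above; how the 512 is obtained and how nodes_test is seeded are not visible in this excerpt, so both are assumptions.

# Per-node accounting as traced at hugepages.sh@27-33 and @115-117.
# Assumptions: nodes_sys comes from per-node HugePages_Total, and
# nodes_test starts from the per-node counts the test expects.
shopt -s extglob nullglob
declare -a nodes_sys nodes_test
resv=0

get_nodes() {
    local node
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 ))   # fail if no NUMA nodes were found
}

get_nodes                          # here: nodes_sys=(512 512), no_nodes=2
nodes_test=("${nodes_sys[@]}")     # assumed seeding

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                    # fold in reserved pages (0 here)
    surp=$(get_meminfo HugePages_Surp "$node")        # per-node surplus query, as traced
    (( nodes_test[node] += surp ))                    # 0 on both nodes in this run
done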
00:03:30.331 10:41:19 -- setup/common.sh@31 -- # IFS=': '
00:03:30.331 10:41:19 -- setup/common.sh@31 -- # read -r var val _
00:03:30.331 10:41:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.331 10:41:19 -- setup/common.sh@33 -- # echo 0
00:03:30.331 10:41:19 -- setup/common.sh@33 -- # return 0
00:03:30.331 10:41:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
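Every get_meminfo call traced here follows the same shape: pick the per-node meminfo file when a node index is supplied, strip the "Node N " prefix those files carry, then walk the "key: value" lines until the requested field turns up. A minimal stand-alone sketch of that pattern (get_meminfo_sketch is a made-up name, not the actual setup/common.sh helper):

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern used below

    # Sketch only: mirrors the lookup pattern in the trace, not the real
    # SPDK setup/common.sh helper; the function name is made up.
    get_meminfo_sketch() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # Use the per-node file when a node index was passed in.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo_sketch HugePages_Surp 0   # prints 0 on the node0 snapshot above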
00:03:30.331 10:41:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:30.331 10:41:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:30.331 10:41:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:30.331 10:41:19 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:30.331 10:41:19 -- setup/common.sh@18 -- # local node=1
00:03:30.331 10:41:19 -- setup/common.sh@19 -- # local var val
00:03:30.331 10:41:19 -- setup/common.sh@20 -- # local mem_f mem
00:03:30.331 10:41:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.331 10:41:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:30.331 10:41:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:30.331 10:41:19 -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.331 10:41:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.331 10:41:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698416 kB' 'MemFree: 16101852 kB' 'MemUsed: 11596564 kB' 'SwapCached: 0 kB' 'Active: 4990868 kB' 'Inactive: 3512596 kB' 'Active(anon): 4759356 kB' 'Inactive(anon): 0 kB' 'Active(file): 231512 kB' 'Inactive(file): 3512596 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8436824 kB' 'Mapped: 95160 kB' 'AnonPages: 66752 kB' 'Shmem: 4692716 kB' 'KernelStack: 9208 kB' 'PageTables: 2204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138292 kB' 'Slab: 522548 kB' 'SReclaimable: 138292 kB' 'SUnreclaim: 384256 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:30.332 10:41:19 -- setup/common.sh@31 -- # IFS=': '
00:03:30.332 10:41:19 -- setup/common.sh@31 -- # read -r var val _
00:03:30.332 10:41:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.332 10:41:19 -- setup/common.sh@33 -- # echo 0
00:03:30.332 10:41:19 -- setup/common.sh@33 -- # return 0
00:03:30.332 10:41:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:30.332 10:41:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:30.332 10:41:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:30.332 10:41:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:30.332 10:41:19 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:03:30.332 10:41:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:30.332 10:41:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:30.332 10:41:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:30.332 10:41:19 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
00:03:30.332 10:41:19 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:30.332
00:03:30.332 real 0m3.734s
00:03:30.332 user 0m1.467s
00:03:30.332 sys 0m2.340s
00:03:30.332 10:41:19 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:30.332 10:41:19 -- common/autotest_common.sh@10 -- # set +x
00:03:30.332 ************************************
00:03:30.332 END TEST even_2G_alloc
00:03:30.333 ************************************
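The "expecting 512" lines above are the tail of the even_2G_alloc check: each node's final count is recorded as a key of an associative array, so a single surviving key means every node received the same share. A reduced sketch of that comparison, with the counts hard-coded to match this run:

    #!/usr/bin/env bash
    # Sketch of the even-allocation comparison; the per-node counts are
    # hard-coded to match this run (512 pages on each of two nodes).
    declare -A sorted_t=()
    declare -a nodes_test=([0]=512 [1]=512)

    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1              # collect distinct counts
        echo "node$node=${nodes_test[node]} expecting 512"
    done

    # A single distinct count means every node got the same share.
    (( ${#sorted_t[@]} == 1 )) && echo "allocation is even"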
00:03:30.333 10:41:19 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:30.333 10:41:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:30.333 10:41:19 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:30.333 10:41:19 -- common/autotest_common.sh@10 -- # set +x
00:03:30.333 ************************************
00:03:30.333 START TEST odd_alloc
00:03:30.333 ************************************
00:03:30.333 10:41:19 -- common/autotest_common.sh@1114 -- # odd_alloc
00:03:30.333 10:41:19 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:30.333 10:41:19 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:30.333 10:41:19 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:30.333 10:41:19 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:30.333 10:41:19 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:30.333 10:41:19 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:30.333 10:41:19 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:30.333 10:41:19 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:30.333 10:41:19 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:30.333 10:41:19 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:30.333 10:41:19 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:30.333 10:41:19 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:30.333 10:41:19 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:30.333 10:41:19 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:30.333 10:41:19 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:30.333 10:41:19 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:30.333 10:41:19 -- setup/hugepages.sh@83 -- # : 513
00:03:30.333 10:41:19 -- setup/hugepages.sh@84 -- # : 1
00:03:30.333 10:41:19 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:30.333 10:41:19 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:30.333 10:41:19 -- setup/hugepages.sh@83 -- # : 0
00:03:30.333 10:41:19 -- setup/hugepages.sh@84 -- # : 0
00:03:30.333 10:41:19 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:30.333 10:41:19 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:30.333 10:41:19 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:30.333 10:41:19 -- setup/hugepages.sh@160 -- # setup output
00:03:30.333 10:41:19 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:30.333 10:41:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:33.624 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:33.624 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:33.624 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:33.624 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:33.624 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:33.624 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:33.624 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:33.624 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:33.624 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:33.624 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:33.624 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:33.624 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:33.624 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:33.624 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:33.624 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:33.624 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:33.624 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
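odd_alloc asks for 2098176 kB, i.e. 1025 pages of 2048 kB, which cannot split evenly between the two nodes; the trace above shows node1 being assigned 512 pages and node0 absorbing the odd 513th before setup.sh re-provisions with HUGEMEM=2049. A sketch of that distribution (split_hugepages is a hypothetical name; remainder-to-node-0 is inferred from the trace):

    #!/usr/bin/env bash
    # Sketch: spread an odd hugepage count over NUMA nodes the way the
    # trace does -- even share everywhere, remainder folded into node 0.
    split_hugepages() {
        local total=$1 nodes=$2 n
        local -a per_node
        local base=$(( total / nodes )) rem=$(( total % nodes ))
        for (( n = 0; n < nodes; n++ )); do
            per_node[n]=$base
        done
        (( per_node[0] += rem ))
        echo "${per_node[*]}"
    }

    split_hugepages 1025 2   # prints: 513 512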
00:03:33.624 10:41:22 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:33.624 10:41:22 -- setup/hugepages.sh@89 -- # local node
00:03:33.624 10:41:22 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:33.624 10:41:22 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:33.624 10:41:22 -- setup/hugepages.sh@92 -- # local surp
00:03:33.624 10:41:22 -- setup/hugepages.sh@93 -- # local resv
00:03:33.624 10:41:22 -- setup/hugepages.sh@94 -- # local anon
00:03:33.624 10:41:22 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:33.624 10:41:22 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:33.624 10:41:22 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:33.624 10:41:22 -- setup/common.sh@18 -- # local node=
00:03:33.624 10:41:22 -- setup/common.sh@19 -- # local var val
00:03:33.624 10:41:22 -- setup/common.sh@20 -- # local mem_f mem
00:03:33.624 10:41:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.624 10:41:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.624 10:41:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.624 10:41:22 -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.624 10:41:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.625 10:41:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 43139264 kB' 'MemAvailable: 46857860 kB' 'Buffers: 9316 kB' 'Cached: 11109544 kB' 'SwapCached: 0 kB' 'Active: 7885072 kB' 'Inactive: 3689320 kB' 'Active(anon): 7467392 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458872 kB' 'Mapped: 165848 kB' 'Shmem: 7011860 kB' 'KReclaimable: 216924 kB' 'Slab: 913788 kB' 'SReclaimable: 216924 kB' 'SUnreclaim: 696864 kB' 'KernelStack: 21776 kB' 'PageTables: 7584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480896 kB' 'Committed_AS: 8615512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214480 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB'
00:03:33.625 10:41:22 -- setup/common.sh@31 -- # IFS=': '
00:03:33.625 10:41:22 -- setup/common.sh@31 -- # read -r var val _
00:03:33.888 10:41:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:33.888 10:41:22 -- setup/common.sh@33 -- # echo 0
00:03:33.888 10:41:22 -- setup/common.sh@33 -- # return 0
00:03:33.888 10:41:22 -- setup/hugepages.sh@97 -- # anon=0
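anon ends up 0 here because the hugepages.sh@96 guard first checks that transparent hugepages are not pinned to [never], and the snapshot then reports AnonHugePages: 0 kB. A sketch of those two steps under that reading (the sysfs/procfs paths are the standard kernel ones, assumed rather than taken from this log):

    #!/usr/bin/env bash
    # Sketch of the anon accounting: skip THP entirely when it is
    # disabled, otherwise read AnonHugePages out of /proc/meminfo.
    anon=0
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "anon=$anon"   # 0 on this runner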
00:03:33.888 10:41:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:33.888 10:41:22 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:33.888 10:41:22 -- setup/common.sh@18 -- # local node=
00:03:33.888 10:41:22 -- setup/common.sh@19 -- # local var val
00:03:33.888 10:41:22 -- setup/common.sh@20 -- # local mem_f mem
00:03:33.888 10:41:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.888 10:41:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.888 10:41:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.888 10:41:22 -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.888 10:41:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.889 10:41:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 43145888 kB' 'MemAvailable: 46864484 kB' 'Buffers: 9316 kB' 'Cached: 11109548 kB' 'SwapCached: 0 kB' 'Active: 7884452 kB' 'Inactive: 3689320 kB' 'Active(anon): 7466772 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 459268 kB' 'Mapped: 165780 kB' 'Shmem: 7011864 kB' 'KReclaimable: 216924 kB' 'Slab: 913840 kB' 'SReclaimable: 216924 kB' 'SUnreclaim: 696916 kB' 'KernelStack: 21728 kB' 'PageTables: 7340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480896 kB' 'Committed_AS: 8615524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214384 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB'
00:03:33.890 10:41:22 -- setup/common.sh@31 -- # IFS=': '
00:03:33.890 10:41:22 -- setup/common.sh@31 -- # read -r var val _
00:03:33.890 10:41:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.890 10:41:22 -- setup/common.sh@33 -- # echo 0
00:03:33.890 10:41:22 -- setup/common.sh@33 -- # return 0
00:03:33.890 10:41:22 -- setup/hugepages.sh@99 -- # surp=0
00:03:33.890 10:41:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:33.890 10:41:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:33.890 10:41:22 -- setup/common.sh@18 -- # local node=
00:03:33.890 10:41:22 -- setup/common.sh@19 -- # local var val
00:03:33.890 10:41:22 -- setup/common.sh@20 -- # local mem_f mem
00:03:33.890 10:41:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.890 10:41:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.890 10:41:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.890 10:41:22 -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.890 10:41:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.890 10:41:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 43144716 kB' 'MemAvailable: 46863312 kB' 'Buffers: 9316 kB' 'Cached: 11109560 kB' 'SwapCached: 0 kB' 'Active: 7885444 kB' 'Inactive: 3689320 kB' 'Active(anon): 7467764 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 459244 kB' 'Mapped: 165780 kB' 'Shmem: 7011876 kB' 'KReclaimable: 216924 kB' 'Slab: 913868 kB' 'SReclaimable: 216924 kB' 'SUnreclaim: 696944 kB' 'KernelStack: 21856 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480896 kB' 'Committed_AS: 8614020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214480 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB'
setup/common.sh@32 -- # continue 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # continue 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # continue 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # continue 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # continue 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # continue 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # continue 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # continue 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # continue 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # continue 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # continue 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # continue 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # continue 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # read -r var val _ 
00:03:33.891 10:41:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # continue 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.891 10:41:22 -- setup/common.sh@33 -- # echo 0 00:03:33.891 10:41:22 -- setup/common.sh@33 -- # return 0 00:03:33.891 10:41:22 -- setup/hugepages.sh@100 -- # resv=0 00:03:33.891 10:41:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:33.891 nr_hugepages=1025 00:03:33.891 10:41:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:33.891 resv_hugepages=0 00:03:33.891 10:41:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:33.891 surplus_hugepages=0 00:03:33.891 10:41:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:33.891 anon_hugepages=0 00:03:33.891 10:41:22 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:33.891 10:41:22 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:33.891 10:41:22 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:33.891 10:41:22 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:33.891 10:41:22 -- setup/common.sh@18 -- # local node= 00:03:33.891 10:41:22 -- setup/common.sh@19 -- # local var val 00:03:33.891 10:41:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:33.891 10:41:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.891 10:41:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.891 10:41:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.891 10:41:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.891 10:41:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.891 10:41:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 43142684 kB' 'MemAvailable: 46861280 kB' 'Buffers: 9316 kB' 'Cached: 11109560 kB' 'SwapCached: 0 kB' 'Active: 7885676 kB' 'Inactive: 3689320 kB' 'Active(anon): 7467996 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 459008 kB' 'Mapped: 165780 kB' 'Shmem: 7011876 kB' 'KReclaimable: 216924 kB' 'Slab: 913868 kB' 'SReclaimable: 216924 kB' 'SUnreclaim: 696944 kB' 'KernelStack: 21936 kB' 'PageTables: 7840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480896 kB' 'Committed_AS: 8615552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214512 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB' 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.891 10:41:22 -- setup/common.sh@32 -- # continue 00:03:33.891 10:41:22 -- setup/common.sh@31 -- # IFS=': ' 
00:03:33.891 [xtrace condensed: the per-key loop walks every key from MemTotal through Unaccepted, none matching HugePages_Total]
00:03:33.893 10:41:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:33.893 10:41:22 -- setup/common.sh@33 -- # echo 1025
00:03:33.893 10:41:22 -- setup/common.sh@33 -- # return 0
00:03:33.893 10:41:22 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
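The three lookups above (HugePages_Surp, HugePages_Rsvd, HugePages_Total) repeat one parse loop from setup/common.sh. Below is a minimal self-contained sketch of that pattern, reconstructed from the xtrace alone; the names mirror the trace, but the body is an approximation rather than the script's actual source:

    #!/usr/bin/env bash
    shopt -s extglob                      # the "Node +([0-9]) " strip below needs extglob

    get_meminfo() {                       # usage: get_meminfo <key> [numa-node]
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo mem
        # with a node argument, read that node's own meminfo instead of the global file
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix each line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # the trace logs one [[ ... ]] / continue pair per key until this match
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Surp            # prints 0, as in the trace above
    get_meminfo HugePages_Surp 0          # node0's value, from the sysfs meminfo

Splitting on IFS=': ' makes the second field the bare number, so the kB unit (when present) falls into the throwaway third variable.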
00:03:33.893 10:41:22 -- setup/hugepages.sh@112 -- # get_nodes
00:03:33.893 10:41:22 -- setup/hugepages.sh@27 -- # local node
00:03:33.893 10:41:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:33.893 10:41:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:33.893 10:41:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:33.893 10:41:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:33.893 10:41:22 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:33.893 10:41:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
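get_nodes above finds both NUMA nodes by globbing sysfs and records each node's current 2 MiB hugepage count (512 on node0, 513 on node1). A sketch of that enumeration follows; the nr_hugepages counter path is an assumption, since the xtrace does not show where the recorded values come from:

    shopt -s extglob                      # for the node+([0-9]) glob

    get_nodes() {
        local node
        nodes_sys=()
        for node in /sys/devices/system/node/node+([0-9]); do
            # index by the numeric suffix of the directory name; the per-node
            # 2048 kB counter below is the standard kernel path (assumed here)
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))                # the trace asserts at least one node exists
    }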
00:03:33.893 10:41:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:33.893 10:41:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:33.893 10:41:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:33.893 10:41:22 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:33.893 10:41:22 -- setup/common.sh@18 -- # local node=0
00:03:33.893 10:41:22 -- setup/common.sh@19 -- # local var val
00:03:33.893 10:41:22 -- setup/common.sh@20 -- # local mem_f mem
00:03:33.893 10:41:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.893 10:41:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:33.893 10:41:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:33.893 10:41:22 -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.893 10:41:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.893 10:41:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 27031832 kB' 'MemUsed: 5553536 kB' 'SwapCached: 0 kB' 'Active: 2896388 kB' 'Inactive: 176724 kB' 'Active(anon): 2710220 kB' 'Inactive(anon): 0 kB' 'Active(file): 186168 kB' 'Inactive(file): 176724 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2681980 kB' 'Mapped: 70620 kB' 'AnonPages: 394368 kB' 'Shmem: 2319088 kB' 'KernelStack: 12792 kB' 'PageTables: 5624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 78632 kB' 'Slab: 391492 kB' 'SReclaimable: 78632 kB' 'SUnreclaim: 312860 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:33.893 10:41:22 -- setup/common.sh@31 -- # IFS=': '
00:03:33.893 10:41:22 -- setup/common.sh@31 -- # read -r var val _
00:03:33.893 [xtrace condensed: the per-key loop walks every node0 meminfo key from MemTotal through HugePages_Free, none matching HugePages_Surp]
00:03:33.894 10:41:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.894 10:41:22 -- setup/common.sh@33 -- # echo 0
00:03:33.894 10:41:22 -- setup/common.sh@33 -- # return 0
00:03:33.894 10:41:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:33.894 10:41:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:33.894 10:41:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:33.894 10:41:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:33.894 10:41:22 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:33.894 10:41:22 -- setup/common.sh@18 -- # local node=1
00:03:33.894 10:41:22 -- setup/common.sh@19 -- # local var val
00:03:33.894 10:41:22 -- setup/common.sh@20 -- # local mem_f mem
00:03:33.894 10:41:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.894 10:41:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:33.894 10:41:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:33.894 10:41:22 -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.894 10:41:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.894 10:41:22 -- setup/common.sh@31 -- # IFS=': '
00:03:33.894 10:41:22 -- setup/common.sh@31 -- # read -r var val _
00:03:33.894 10:41:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698416 kB' 'MemFree: 16105872 kB' 'MemUsed: 11592544 kB' 'SwapCached: 0 kB' 'Active: 4991592 kB' 'Inactive: 3512596 kB' 'Active(anon): 4760080 kB' 'Inactive(anon): 0 kB' 'Active(file): 231512 kB' 'Inactive(file): 3512596 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8436936 kB' 'Mapped: 96016 kB' 'AnonPages: 66916 kB' 'Shmem: 4692828 kB' 'KernelStack: 9240 kB' 'PageTables: 2368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138292 kB' 'Slab: 522376 kB' 'SReclaimable: 138292 kB' 'SUnreclaim: 384084 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:03:33.894 [xtrace condensed: the per-key loop walks every node1 meminfo key from MemTotal through HugePages_Free, none matching HugePages_Surp]
00:03:33.895 10:41:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.895 10:41:22 -- setup/common.sh@33 -- # echo 0
00:03:33.895 10:41:22 -- setup/common.sh@33 -- # return 0
00:03:33.895 10:41:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:33.895 10:41:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:33.895 10:41:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:33.895 10:41:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:33.895 10:41:22 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:33.895 node0=512 expecting 513
00:03:33.895 10:41:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:33.895 10:41:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:33.895 10:41:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:33.895 10:41:22 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:33.895 node1=513 expecting 512
00:03:33.895 10:41:22 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
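The comparison that just passed is deliberately order-insensitive: odd_alloc splits its 1025 pages as 512+513, but the kernel may place the larger half on either node (hence "node0=512 expecting 513"). The test therefore indexes two scratch arrays by the counts themselves and compares the key lists, which bash expands in ascending order. A small illustration with made-up placements:

    # indexed bash arrays expand their keys in ascending numeric order, so
    # using each count as an index turns the check into a sorted-set compare
    nodes_test=([0]=512 [1]=513)          # measured per-node totals (example)
    nodes_sys=([0]=513 [1]=512)           # expected totals, swapped across nodes
    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
    done
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo match   # "512 513" == "512 513"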
00:03:33.895 
00:03:33.895 real	0m3.489s
00:03:33.895 user	0m1.291s
00:03:33.895 sys	0m2.252s
00:03:33.895 10:41:22 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:33.895 10:41:22 -- common/autotest_common.sh@10 -- # set +x
00:03:33.895 ************************************
00:03:33.895 END TEST odd_alloc
00:03:33.895 ************************************
00:03:33.895 10:41:22 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:33.895 10:41:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:33.895 10:41:22 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:33.895 10:41:22 -- common/autotest_common.sh@10 -- # set +x
00:03:33.895 ************************************
00:03:33.895 START TEST custom_alloc
00:03:33.895 ************************************
00:03:33.895 10:41:22 -- common/autotest_common.sh@1114 -- # custom_alloc
00:03:33.895 10:41:22 -- setup/hugepages.sh@167 -- # local IFS=,
00:03:33.895 10:41:22 -- setup/hugepages.sh@169 -- # local node
00:03:33.895 10:41:22 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:33.895 10:41:22 -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:33.895 10:41:22 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:33.895 10:41:22 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:33.895 10:41:22 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:33.895 10:41:22 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:33.895 10:41:22 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:33.895 10:41:22 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:33.895 10:41:22 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:33.895 10:41:22 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:33.895 10:41:22 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:33.895 10:41:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:33.895 10:41:22 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:33.895 10:41:22 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:33.895 10:41:22 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:33.895 10:41:22 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:33.895 10:41:22 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:33.895 10:41:22 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:33.895 10:41:22 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:33.895 10:41:22 -- setup/hugepages.sh@83 -- # : 256
00:03:33.895 10:41:22 -- setup/hugepages.sh@84 -- # : 1
00:03:33.895 10:41:22 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:33.895 10:41:22 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:33.895 10:41:22 -- setup/hugepages.sh@83 -- # : 0
00:03:33.895 10:41:22 -- setup/hugepages.sh@84 -- # : 0
00:03:33.895 10:41:22 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:33.895 10:41:22 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:33.895 10:41:22 -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:33.895 10:41:22 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:33.895 10:41:22 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:33.895 10:41:22 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:33.895 10:41:22 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:33.895 10:41:22 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:33.895 10:41:22 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:33.895 10:41:22 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:33.895 10:41:22 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:33.895 10:41:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:33.895 10:41:22 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:33.895 10:41:22 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:33.895 10:41:22 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:33.895 10:41:22 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:33.895 10:41:22 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:33.895 10:41:22 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:33.895 10:41:22 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:33.895 10:41:22 -- setup/hugepages.sh@78 -- # return 0
00:03:33.895 10:41:22 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:33.895 10:41:22 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:33.895 10:41:22 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:33.895 10:41:22 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:33.895 10:41:22 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:33.895 10:41:22 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:33.895 10:41:22 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:33.895 10:41:22 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:33.895 10:41:22 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:33.895 10:41:22 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:33.895 10:41:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:33.895 10:41:22 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:33.895 10:41:22 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:33.895 10:41:22 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:33.895 10:41:22 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:33.895 10:41:22 -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:33.895 10:41:22 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:33.895 10:41:22 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:33.895 10:41:22 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:33.895 10:41:22 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:33.895 10:41:22 -- setup/hugepages.sh@78 -- # return 0
00:03:33.895 10:41:22 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
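The two sizes passed to get_test_nr_hugepages are consistent with the 'Hugepagesize: 2048 kB' seen in the meminfo dumps, assuming both figures are in kB (the trace does not show the units): 1048576/2048 = 512 pages for node0 and 2097152/2048 = 1024 for node1, which is exactly the HUGENODE string handed to setup.sh. The arithmetic, with hypothetical variable names:

    hugepagesize_kb=2048                             # 'Hugepagesize: 2048 kB' from the dumps
    echo $(( 1048576 / hugepagesize_kb ))            # 512  -> nodes_hp[0]
    echo $(( 2097152 / hugepagesize_kb ))            # 1024 -> nodes_hp[1]
    HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'      # the two entries joined with IFS=,
    echo $(( 512 + 1024 ))                           # 1536, matching nr_hugepages=1536 below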
vfio-pci driver
00:03:37.184 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:37.184 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:37.184 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:37.184 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:37.184 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:37.184 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:37.184 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:37.184 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:37.184 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:37.184 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:37.184 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:37.184 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:37.184 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:37.184 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:37.185 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:37.185 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:37.185 10:41:26 -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:37.185 10:41:26 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:37.185 10:41:26 -- setup/hugepages.sh@89 -- # local node
00:03:37.185 10:41:26 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:37.185 10:41:26 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:37.185 10:41:26 -- setup/hugepages.sh@92 -- # local surp
00:03:37.185 10:41:26 -- setup/hugepages.sh@93 -- # local resv
00:03:37.185 10:41:26 -- setup/hugepages.sh@94 -- # local anon
00:03:37.185 10:41:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:37.185 10:41:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:37.185 10:41:26 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:37.185 10:41:26 -- setup/common.sh@18 -- # local node=
00:03:37.185 10:41:26 -- setup/common.sh@19 -- # local var val
00:03:37.185 10:41:26 -- setup/common.sh@20 -- # local mem_f mem
00:03:37.185 10:41:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:37.185 10:41:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:37.185 10:41:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:37.185 10:41:26 -- setup/common.sh@28 -- # mapfile -t mem
00:03:37.185 10:41:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:37.185 10:41:26 -- setup/common.sh@31 -- # IFS=': '
00:03:37.185 10:41:26 -- setup/common.sh@31 -- # read -r var val _
00:03:37.185 10:41:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 42089100 kB' 'MemAvailable: 45807692 kB' 'Buffers: 9316 kB' 'Cached: 11109676 kB' 'SwapCached: 0 kB' 'Active: 7885860 kB' 'Inactive: 3689320 kB' 'Active(anon): 7468180 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 459456 kB' 'Mapped: 165916 kB' 'Shmem: 7011992 kB' 'KReclaimable: 216916 kB' 'Slab: 914400 kB' 'SReclaimable: 216916 kB' 'SUnreclaim: 697484 kB' 'KernelStack: 21744 kB' 'PageTables: 7404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957632 kB' 'Committed_AS: 8612496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214352 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB'
00:03:37.185 [setup/common.sh@31-@32 xtrace trimmed: the IFS=': ' / read -r var val _ / [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue cycle repeats for every /proc/meminfo key until AnonHugePages matches]
00:03:37.186 10:41:26 -- setup/common.sh@33 -- # echo 0
00:03:37.186 10:41:26 -- setup/common.sh@33 -- # return 0
00:03:37.449 10:41:26 -- setup/hugepages.sh@97 -- # anon=0
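That first get_meminfo call returns 0 for AnonHugePages. The parsing pattern is visible in the trace itself: mapfile slurps the meminfo file, any "Node <n> " prefix is stripped, and each line is split on ':' and spaces until the requested key matches. A self-contained sketch of the same pattern (not the SPDK helper itself):

#!/usr/bin/env bash
# Sketch only: the meminfo parsing pattern from the get_meminfo xtrace.
shopt -s extglob
get=AnonHugePages
mapfile -t mem < /proc/meminfo
mem=("${mem[@]#Node +([0-9]) }")   # no-op for /proc/meminfo; needed for per-node files

for line in "${mem[@]}"; do
  IFS=': ' read -r var val _ <<< "$line"
  [[ $var == "$get" ]] || continue   # the long run of 'continue' lines in the log
  echo "$val"                        # kB value; 0 in the run above
  break
done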
00:03:37.449 10:41:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:37.449 10:41:26 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:37.449 10:41:26 -- setup/common.sh@18 -- # local node=
00:03:37.449 10:41:26 -- setup/common.sh@19 -- # local var val
00:03:37.449 10:41:26 -- setup/common.sh@20 -- # local mem_f mem
00:03:37.449 10:41:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:37.449 10:41:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:37.449 10:41:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:37.449 10:41:26 -- setup/common.sh@28 -- # mapfile -t mem
00:03:37.449 10:41:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:37.449 10:41:26 -- setup/common.sh@31 -- # IFS=': '
00:03:37.449 10:41:26 -- setup/common.sh@31 -- # read -r var val _
00:03:37.449 10:41:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 42090136 kB' 'MemAvailable: 45808728 kB' 'Buffers: 9316 kB' 'Cached: 11109680 kB' 'SwapCached: 0 kB' 'Active: 7887984 kB' 'Inactive: 3689320 kB' 'Active(anon): 7470304 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 461568 kB' 'Mapped: 166420 kB' 'Shmem: 7011996 kB' 'KReclaimable: 216916 kB' 'Slab: 914392 kB' 'SReclaimable: 216916 kB' 'SUnreclaim: 697476 kB' 'KernelStack: 21728 kB' 'PageTables: 7360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957632 kB' 'Committed_AS: 8614560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214272 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB'
00:03:37.449 [setup/common.sh@31-@32 xtrace trimmed: the read/match/continue cycle repeats for every /proc/meminfo key until HugePages_Surp matches]
00:03:37.451 10:41:26 -- setup/common.sh@33 -- # echo 0
00:03:37.451 10:41:26 -- setup/common.sh@33 -- # return 0
00:03:37.451 10:41:26 -- setup/hugepages.sh@99 -- # surp=0
00:03:37.451 10:41:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:37.451 10:41:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:37.451 10:41:26 -- setup/common.sh@18 -- # local node=
00:03:37.451 10:41:26 -- setup/common.sh@19 -- # local var val
00:03:37.451 10:41:26 -- setup/common.sh@20 -- # local mem_f mem
00:03:37.451 10:41:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:37.451 10:41:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:37.451 10:41:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:37.451 10:41:26 -- setup/common.sh@28 -- # mapfile -t mem
00:03:37.451 10:41:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:37.451 10:41:26 -- setup/common.sh@31 -- # IFS=': '
00:03:37.451 10:41:26 -- setup/common.sh@31 -- # read -r var val _
00:03:37.451 10:41:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 42088536 kB' 'MemAvailable: 45807128 kB' 'Buffers: 9316 kB' 'Cached: 11109680 kB' 'SwapCached: 0 kB' 'Active: 7890516 kB' 'Inactive: 3689320 kB' 'Active(anon): 7472836 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464636 kB' 'Mapped: 166372 kB' 'Shmem: 7011996 kB' 'KReclaimable: 216916 kB' 'Slab: 914392 kB' 'SReclaimable: 216916 kB' 'SUnreclaim: 697476 kB' 'KernelStack: 21728 kB' 'PageTables: 7368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957632 kB' 'Committed_AS: 8617756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214276 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB'
00:03:37.451 [setup/common.sh@31-@32 xtrace trimmed: the read/match/continue cycle repeats for every /proc/meminfo key until HugePages_Rsvd matches]
00:03:37.452 10:41:26 -- setup/common.sh@33 -- # echo 0
00:03:37.452 10:41:26 -- setup/common.sh@33 -- # return 0
00:03:37.452 10:41:26 -- setup/hugepages.sh@100 -- # resv=0
00:03:37.452 10:41:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:37.452 nr_hugepages=1536
00:03:37.452 10:41:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:37.452 resv_hugepages=0
00:03:37.452 10:41:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:37.452 surplus_hugepages=0
00:03:37.452 10:41:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:37.452 anon_hugepages=0
00:03:37.452 10:41:26 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:37.452 10:41:26 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
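The @107 check is the heart of verify_nr_hugepages: the configured page count must equal what the test requested plus surplus and reserved pages. A sketch that re-derives the same arithmetic straight from /proc/meminfo; meminfo_get is a hypothetical helper, and 1536 is the count this particular run requested:

#!/usr/bin/env bash
# Sketch only: the hugepages.sh@107 consistency check, standalone.
meminfo_get() { awk -v key="$1:" '$1 == key { print $2 }' /proc/meminfo; }

nr_hugepages=1536
surp=$(meminfo_get HugePages_Surp)
resv=$(meminfo_get HugePages_Rsvd)
total=$(meminfo_get HugePages_Total)

if (( total == nr_hugepages + surp + resv )); then
  echo "hugepage accounting consistent: $total pages"
else
  echo "mismatch: total=$total, expected=$(( nr_hugepages + surp + resv ))" >&2
  exit 1
fi

In this run surp=0 and resv=0, so the check reduces to HugePages_Total == 1536, which the next get_meminfo call confirms.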
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.452 10:41:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.452 10:41:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.452 10:41:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.452 10:41:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 42091468 kB' 'MemAvailable: 45810060 kB' 'Buffers: 9316 kB' 'Cached: 11109708 kB' 'SwapCached: 0 kB' 'Active: 7885696 kB' 'Inactive: 3689320 kB' 'Active(anon): 7468016 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 459316 kB' 'Mapped: 166676 kB' 'Shmem: 7012024 kB' 'KReclaimable: 216916 kB' 'Slab: 914364 kB' 'SReclaimable: 216916 kB' 'SUnreclaim: 697448 kB' 'KernelStack: 21744 kB' 'PageTables: 7428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957632 kB' 'Committed_AS: 8612744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214288 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB' 00:03:37.452 10:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.452 10:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.452 10:41:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.452 10:41:26 -- setup/common.sh@32 -- # continue 00:03:37.452 10:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.452 10:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.452 10:41:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.452 10:41:26 -- setup/common.sh@32 -- # continue 00:03:37.452 10:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.452 10:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.452 10:41:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.452 10:41:26 -- setup/common.sh@32 -- # continue 00:03:37.452 10:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.452 10:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.452 10:41:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.452 10:41:26 -- setup/common.sh@32 -- # continue 00:03:37.452 10:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.452 10:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.452 10:41:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.452 10:41:26 -- setup/common.sh@32 -- # continue 00:03:37.452 10:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.452 10:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.452 10:41:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.452 10:41:26 -- setup/common.sh@32 -- # continue 00:03:37.452 10:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.452 10:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.452 10:41:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.452 10:41:26 -- 
setup/common.sh@32 -- # continue
[xtrace condensed: setup/common.sh@31-32 repeats for every remaining /proc/meminfo key from Inactive through Unaccepted; none matches HugePages_Total, so each iteration just hits "continue"]
00:03:37.454 10:41:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:37.454 10:41:26 -- setup/common.sh@33 -- # echo 1536
00:03:37.454 10:41:26 -- setup/common.sh@33 -- # return 0
00:03:37.454 10:41:26 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:37.454 10:41:26 -- setup/hugepages.sh@112 -- # get_nodes
00:03:37.454 10:41:26 -- setup/hugepages.sh@27 -- # local node
00:03:37.454 10:41:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:37.454 10:41:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:37.454 10:41:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:37.454 10:41:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:37.454 10:41:26 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:37.454 10:41:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:37.454 10:41:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:37.454 10:41:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:37.454 10:41:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:37.454 10:41:26 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:37.454 10:41:26 -- setup/common.sh@18 -- # local node=0
00:03:37.454 10:41:26 -- setup/common.sh@19 -- # local var val
00:03:37.454 10:41:26 -- setup/common.sh@20 -- # local mem_f mem
00:03:37.454 10:41:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:37.454 10:41:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:37.454 10:41:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:37.454 10:41:26 -- setup/common.sh@28 -- # mapfile -t mem
00:03:37.454 10:41:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:37.454 10:41:26 -- setup/common.sh@31 -- # IFS=': '
00:03:37.454 10:41:26 -- setup/common.sh@31 -- # read -r var val _
00:03:37.454 10:41:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 27023740 kB' 'MemUsed: 5561628 kB' 'SwapCached: 0 kB' 'Active: 2892596 kB' 'Inactive: 176724 kB' 'Active(anon): 2706428 kB' 'Inactive(anon): 0 kB' 'Active(file): 186168 kB' 'Inactive(file): 176724 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2682032 kB' 'Mapped: 70624 kB' 'AnonPages: 390420 kB' 'Shmem: 2319140 kB' 'KernelStack: 12488 kB' 'PageTables: 5000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 78624 kB' 'Slab: 391592 kB' 'SReclaimable: 78624 kB' 'SUnreclaim: 312968 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
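
Editor's note: for readability, here is a minimal stand-alone sketch of the lookup these get_meminfo traces keep performing. It is my reconstruction from the xtrace, not SPDK's setup/common.sh itself, and get_field is an illustrative name: pick the per-node meminfo file when a node id is given, strip the "Node N " prefix those files carry (the mem=("${mem[@]#Node +([0-9]) }") step above), then scan "key: value" pairs until the requested key matches.

    get_field() {
        local get=$1 node=$2 mem_f=/proc/meminfo var val _
        # Prefer the per-node file when a node id was given and sysfs exposes one.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        # Per-node lines look like "Node 0 MemTotal: ...", so drop that prefix,
        # then split each line on ":" and spaces into key, value, unit.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

On the box traced above, get_field HugePages_Surp 0 would print 0, matching the "echo 0" the log reaches below.
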
[xtrace condensed: the same @31-32 read loop now walks the node0 meminfo keys (MemTotal through HugePages_Free); everything ahead of HugePages_Surp hits "continue"]
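
Editor's note: condensed, the hugepages.sh@115-117 loop the trace is in the middle of does the following. A sketch reusing the hypothetical get_field helper from the note above; resv was computed earlier in the test, outside this excerpt.

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))   # reserved pages count against each node
        # Per-node surplus, fetched via the scan traced above (0 for both nodes here):
        (( nodes_test[node] += $(get_field HugePages_Surp "$node") ))
    done
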
00:03:37.455 10:41:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:37.455 10:41:26 -- setup/common.sh@33 -- # echo 0
00:03:37.455 10:41:26 -- setup/common.sh@33 -- # return 0
00:03:37.455 10:41:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:37.455 10:41:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:37.455 10:41:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:37.455 10:41:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:37.455 10:41:26 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:37.455 10:41:26 -- setup/common.sh@18 -- # local node=1
00:03:37.455 10:41:26 -- setup/common.sh@19 -- # local var val
00:03:37.455 10:41:26 -- setup/common.sh@20 -- # local mem_f mem
00:03:37.455 10:41:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:37.455 10:41:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:37.455 10:41:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:37.455 10:41:26 -- setup/common.sh@28 -- # mapfile -t mem
00:03:37.455 10:41:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:37.455 10:41:26 -- setup/common.sh@31 -- # IFS=': '
00:03:37.455 10:41:26 -- setup/common.sh@31 -- # read -r var val _
00:03:37.455 10:41:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698416 kB' 'MemFree: 15060632 kB' 'MemUsed: 12637784 kB' 'SwapCached: 0 kB' 'Active: 4997768 kB' 'Inactive: 3512596 kB' 'Active(anon): 4766256 kB' 'Inactive(anon): 0 kB' 'Active(file): 231512 kB' 'Inactive(file): 3512596 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8437004 kB' 'Mapped: 95948 kB' 'AnonPages: 73488 kB' 'Shmem: 4692896 kB' 'KernelStack: 9224 kB' 'PageTables: 2324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138292 kB' 'Slab: 522772 kB' 'SReclaimable: 138292 kB' 'SUnreclaim: 384480 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: the read loop walks the node1 meminfo keys (MemTotal through HugePages_Free); everything ahead of HugePages_Surp hits "continue"]
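
Editor's note: just below, the trace reaches the hugepages.sh@126-130 comparison. A sketch reconstructed from the xtrace, with the values from this run filled in: indexed bash arrays double as sorted sets, since "${!arr[*]}" reports their keys in ascending order, and the two comma-joined key lists are then compared.

    declare -a sorted_t sorted_s nodes_test nodes_sys
    nodes_test=([0]=512 [1]=1024)   # expected counts after folding in resv/surp (both 0 here)
    nodes_sys=([0]=512 [1]=1024)    # counts read back from sysfs by get_nodes
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done
    # Joins the keys with commas, hence the "[[ 512,1024 == 512,1024 ]]" check below.
    [[ $(IFS=,; echo "${!sorted_t[*]}") == $(IFS=,; echo "${!sorted_s[*]}") ]]
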
00:03:37.456 10:41:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:37.456 10:41:26 -- setup/common.sh@33 -- # echo 0
00:03:37.456 10:41:26 -- setup/common.sh@33 -- # return 0
00:03:37.456 10:41:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:37.456 10:41:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:37.456 10:41:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:37.456 10:41:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:37.456 10:41:26 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:03:37.456 10:41:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:37.456 10:41:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:37.456 10:41:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:37.456 10:41:26 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
node1=1024 expecting 1024
00:03:37.456 10:41:26 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:37.456
00:03:37.456 real 0m3.493s
00:03:37.456 user 0m1.300s
00:03:37.456 sys 0m2.259s
00:03:37.456 10:41:26 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:37.456 10:41:26 -- common/autotest_common.sh@10 -- # set +x
00:03:37.456 ************************************
00:03:37.456 END TEST custom_alloc
00:03:37.456 ************************************
00:03:37.456 10:41:26 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:37.456 10:41:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:37.456 10:41:26 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:37.456 10:41:26 -- common/autotest_common.sh@10 -- # set +x
00:03:37.456 ************************************
00:03:37.456 START TEST no_shrink_alloc
00:03:37.456 ************************************
00:03:37.456 10:41:26 -- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:03:37.456 10:41:26 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:37.456 10:41:26 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:37.456 10:41:26 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:37.456 10:41:26 -- setup/hugepages.sh@51 -- # shift
00:03:37.456 10:41:26 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:37.456 10:41:26 -- setup/hugepages.sh@52 -- # local node_ids
00:03:37.456 10:41:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:37.456 10:41:26 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:37.456 10:41:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:37.456 10:41:26 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:37.456 10:41:26 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:37.456 10:41:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:37.456 10:41:26 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:37.456 10:41:26 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:37.456 10:41:26 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:37.456 10:41:26 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:37.456 10:41:26 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:37.456 10:41:26 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:37.456 10:41:26 -- setup/hugepages.sh@73 -- # return 0
00:03:37.456 10:41:26 -- setup/hugepages.sh@198 -- # setup output
00:03:37.456 10:41:26 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:37.456 10:41:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:40.744 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:40.744 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:40.744 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:40.744 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:40.744 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:40.744 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:40.744 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:40.744 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:40.744 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:40.744 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:40.744 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:40.744 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:40.744 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:40.744 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:40.744 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:40.744 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:41.006 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:41.006 10:41:29 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:41.006 10:41:29 -- setup/hugepages.sh@89 -- # local node
00:03:41.006 10:41:29 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:41.006 10:41:29 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:41.006 10:41:29 -- setup/hugepages.sh@92 -- # local surp
00:03:41.006 10:41:29 -- setup/hugepages.sh@93 -- # local resv
00:03:41.006 10:41:29 -- setup/hugepages.sh@94 -- # local anon
00:03:41.006 10:41:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
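
Editor's note: the get_test_nr_hugepages walk above turns the 2097152 argument into nr_hugepages=1024. A sketch of that arithmetic, assuming (as the traced result implies) that both the requested size and the default hugepage size are expressed in kB; variable names are illustrative.

    size_kb=2097152      # requested pool, i.e. 2 GiB
    hugepage_kb=2048     # matches 'Hugepagesize: 2048 kB' in the meminfo dumps below
    nr_hugepages=$(( size_kb / hugepage_kb ))   # = 1024
    # node_ids=('0') pins the whole pool on node 0:
    declare -a nodes_test=()
    for id in 0; do
        nodes_test[id]=$nr_hugepages
    done
    echo "nodes_test[0]=${nodes_test[0]}"       # 1024
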
00:03:41.006 10:41:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:41.006 10:41:29 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:41.006 10:41:29 -- setup/common.sh@18 -- # local node=
00:03:41.006 10:41:29 -- setup/common.sh@19 -- # local var val
00:03:41.006 10:41:29 -- setup/common.sh@20 -- # local mem_f mem
00:03:41.006 10:41:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:41.006 10:41:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:41.006 10:41:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:41.006 10:41:29 -- setup/common.sh@28 -- # mapfile -t mem
00:03:41.006 10:41:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:41.006 10:41:29 -- setup/common.sh@31 -- # IFS=': '
00:03:41.006 10:41:29 -- setup/common.sh@31 -- # read -r var val _
00:03:41.006 10:41:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 43106696 kB' 'MemAvailable: 46825288 kB' 'Buffers: 9316 kB' 'Cached: 11109804 kB' 'SwapCached: 0 kB' 'Active: 7886004 kB' 'Inactive: 3689320 kB' 'Active(anon): 7468324 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 459452 kB' 'Mapped: 165544 kB' 'Shmem: 7012120 kB' 'KReclaimable: 216916 kB' 'Slab: 913928 kB' 'SReclaimable: 216916 kB' 'SUnreclaim: 697012 kB' 'KernelStack: 21744 kB' 'PageTables: 7388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 8612388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214320 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB'
[xtrace condensed: the read loop walks /proc/meminfo from MemTotal down to HardwareCorrupted; each non-matching key hits "continue"]
00:03:41.007 10:41:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:41.007 10:41:29 -- setup/common.sh@33 -- # echo 0
00:03:41.007 10:41:29 -- setup/common.sh@33 -- # return 0
00:03:41.007 10:41:29 -- setup/hugepages.sh@97 -- # anon=0
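
Editor's note: between the anon= assignment above and the surp=/resv= lookups that follow, verify_nr_hugepages is just snapshotting global counters from /proc/meminfo. An equivalent stand-alone probe, my condensation with awk rather than the script's own scan:

    anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)    # 0 (kB) in this run: no THP in play
    surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)   # 0: nothing allocated beyond the pool
    resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)   # queried next in the trace
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    echo "total=$total surp=$surp resv=$resv anon=$anon"             # total=1024 on this box
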
00:03:41.007 10:41:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.007 10:41:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.007 10:41:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.008 10:41:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 43107552 kB' 'MemAvailable: 46826144 kB' 'Buffers: 9316 kB' 'Cached: 11109808 kB' 'SwapCached: 0 kB' 'Active: 7886288 kB' 'Inactive: 3689320 kB' 'Active(anon): 7468608 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 459780 kB' 'Mapped: 165544 kB' 'Shmem: 7012124 kB' 'KReclaimable: 216916 kB' 'Slab: 913896 kB' 'SReclaimable: 216916 kB' 'SUnreclaim: 696980 kB' 'KernelStack: 21712 kB' 'PageTables: 7300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 8612400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214304 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB' 00:03:41.008 10:41:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.008 10:41:29 -- setup/common.sh@32 -- # continue 00:03:41.008 10:41:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.008 10:41:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.008 10:41:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.008 10:41:29 -- setup/common.sh@32 -- # continue 00:03:41.008 10:41:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.008 10:41:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.008 10:41:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.008 10:41:29 -- setup/common.sh@32 -- # continue 00:03:41.008 10:41:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.008 10:41:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.008 10:41:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.008 10:41:29 -- setup/common.sh@32 -- # continue 00:03:41.008 10:41:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.008 10:41:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.008 10:41:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.008 10:41:29 -- setup/common.sh@32 -- # continue 00:03:41.008 10:41:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.008 10:41:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.008 10:41:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.008 10:41:29 -- setup/common.sh@32 -- # continue 00:03:41.008 10:41:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.008 10:41:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.008 10:41:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.008 10:41:29 -- setup/common.sh@32 -- # continue 00:03:41.008 10:41:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.008 10:41:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.008 10:41:29 -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.008 10:41:29 -- setup/common.sh@32 -- # continue
00:03:41.008 10:41:29 -- setup/common.sh@31/@32 -- # per-key scan for \H\u\g\e\P\a\g\e\s\_\S\u\r\p: Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd; no match, continue
00:03:41.009 10:41:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:41.009 10:41:29 -- setup/common.sh@33 -- # echo 0
00:03:41.009 10:41:29 -- setup/common.sh@33 -- # return 0
00:03:41.009 10:41:29 -- setup/hugepages.sh@99 -- # surp=0
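The trace above is setup/common.sh's get_meminfo resolving HugePages_Surp: it snapshots the meminfo table, then walks it entry by entry, splitting each line with IFS=': ' and read -r var val _ until the requested key matches, and echoes the value (here 0). A minimal sketch of that pattern in plain bash; the function name get_meminfo_value is illustrative, not the script's own:

  #!/usr/bin/env bash
  # Sketch: return the value column for one /proc/meminfo key by
  # splitting each "Key:   value kB" line on ':' and whitespace.
  # get_meminfo_value is a hypothetical name, not setup/common.sh's.
  get_meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then   # key matched, e.g. HugePages_Surp
              echo "$val"
              return 0
          fi
      done </proc/meminfo
      return 1   # key not present
  }
  get_meminfo_value HugePages_Surp   # on this box: 0

The escaped pattern \H\u\g\e\P\a\g\e\s\_\S\u\r\p in the trace is just bash xtrace rendering a literal right-hand side of [[ == ]]; quoting the variable, as above, compares the same way.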
00:03:41.009 10:41:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:41.009 10:41:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:41.009 10:41:29 -- setup/common.sh@18 -- # local node=
00:03:41.009 10:41:29 -- setup/common.sh@19 -- # local var val
00:03:41.009 10:41:29 -- setup/common.sh@20 -- # local mem_f mem
00:03:41.009 10:41:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:41.009 10:41:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:41.009 10:41:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:41.009 10:41:29 -- setup/common.sh@28 -- # mapfile -t mem
00:03:41.009 10:41:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:41.009 10:41:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 43106856 kB' 'MemAvailable: 46825448 kB' 'Buffers: 9316 kB' 'Cached: 11109820 kB' 'SwapCached: 0 kB' 'Active: 7885732 kB' 'Inactive: 3689320 kB' 'Active(anon): 7468052 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 459212 kB' 'Mapped: 165448 kB' 'Shmem: 7012136 kB' 'KReclaimable: 216916 kB' 'Slab: 913860 kB' 'SReclaimable: 216916 kB' 'SUnreclaim: 696944 kB' 'KernelStack: 21728 kB' 'PageTables: 7340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 8612412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214304 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB'
00:03:41.009 10:41:29 -- setup/common.sh@31/@32 -- # per-key scan for \H\u\g\e\P\a\g\e\s\_\R\s\v\d: MemTotal through HugePages_Free; no match, continue
00:03:41.010 10:41:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:41.010 10:41:29 -- setup/common.sh@33 -- # echo 0
00:03:41.010 10:41:29 -- setup/common.sh@33 -- # return 0
00:03:41.010 10:41:29 -- setup/hugepages.sh@100 -- # resv=0
00:03:41.010 10:41:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:41.010 10:41:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:41.010 10:41:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:41.010 10:41:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:41.010 10:41:29 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:41.010 10:41:29 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
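With surp=0 and resv=0 in hand, hugepages.sh cross-checks that the requested page count equals nr_hugepages plus the surplus and reserved counts before trusting the configuration. A hedged sketch of that arithmetic, reusing the illustrative get_meminfo_value helper above and assuming the configured count is read from /proc/sys/vm/nr_hugepages (the trace does not show where the script takes it from):

  # Sketch of the consistency check traced above; assumptions noted
  # in the lead-in, not the script's literal code.
  requested=1024
  nr_hugepages=$(</proc/sys/vm/nr_hugepages)
  surp=$(get_meminfo_value HugePages_Surp)
  resv=$(get_meminfo_value HugePages_Rsvd)
  if (( requested == nr_hugepages + surp + resv )); then
      echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
  else
      echo "hugepage accounting mismatch" >&2
  fi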
00:03:41.010 10:41:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:41.010 10:41:29 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:41.010 10:41:29 -- setup/common.sh@18 -- # local node=
00:03:41.010 10:41:29 -- setup/common.sh@19 -- # local var val
00:03:41.010 10:41:29 -- setup/common.sh@20 -- # local mem_f mem
00:03:41.010 10:41:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:41.010 10:41:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:41.010 10:41:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:41.010 10:41:29 -- setup/common.sh@28 -- # mapfile -t mem
00:03:41.010 10:41:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:41.010 10:41:29 -- setup/common.sh@31 -- # IFS=': '
00:03:41.010 10:41:29 -- setup/common.sh@31 -- # read -r var val _
00:03:41.010 10:41:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 43106856 kB' 'MemAvailable: 46825448 kB' 'Buffers: 9316 kB' 'Cached: 11109836 kB' 'SwapCached: 0 kB' 'Active: 7885788 kB' 'Inactive: 3689320 kB' 'Active(anon): 7468108 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 459208 kB' 'Mapped: 165448 kB' 'Shmem: 7012152 kB' 'KReclaimable: 216916 kB' 'Slab: 913860 kB' 'SReclaimable: 216916 kB' 'SUnreclaim: 696944 kB' 'KernelStack: 21712 kB' 'PageTables: 7284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 8612428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214304 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB'
00:03:41.011 10:41:29 -- setup/common.sh@31/@32 -- # per-key scan for \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l: MemTotal through Unaccepted; no match, continue
00:03:41.012 10:41:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:41.012 10:41:29 -- setup/common.sh@33 -- # echo 1024
00:03:41.012 10:41:29 -- setup/common.sh@33 -- # return 0
00:03:41.012 10:41:29 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:41.012 10:41:29 -- setup/hugepages.sh@112 -- # get_nodes
00:03:41.012 10:41:29 -- setup/hugepages.sh@27 -- # local node
00:03:41.012 10:41:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:41.012 10:41:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:41.012 10:41:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:41.012 10:41:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:41.012 10:41:29 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:41.012 10:41:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
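get_nodes builds the per-node table by globbing /sys/devices/system/node/node+([0-9]); on this box it finds two nodes, with 1024 and 0 hugepages respectively. A rough reconstruction of that walk (the awk extraction is an assumption; the trace only shows the glob and the assignments):

  # Sketch: per-node hugepage counts from each node's meminfo.
  # Node meminfo lines look like "Node 0 HugePages_Total:  1024".
  nodes_sys=()
  for node in /sys/devices/system/node/node[0-9]*; do
      nodes_sys[${node##*node}]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
  done
  echo "no_nodes=${#nodes_sys[@]}"      # 2 here
  for n in "${!nodes_sys[@]}"; do
      echo "node$n=${nodes_sys[$n]}"    # node0=1024, node1=0
  done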
00:03:41.012 10:41:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:41.012 10:41:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:41.012 10:41:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:41.012 10:41:29 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:41.012 10:41:29 -- setup/common.sh@18 -- # local node=0
00:03:41.012 10:41:29 -- setup/common.sh@19 -- # local var val
00:03:41.012 10:41:29 -- setup/common.sh@20 -- # local mem_f mem
00:03:41.012 10:41:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:41.012 10:41:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:41.012 10:41:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:41.012 10:41:29 -- setup/common.sh@28 -- # mapfile -t mem
00:03:41.012 10:41:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:41.012 10:41:29 -- setup/common.sh@31 -- # IFS=': '
00:03:41.012 10:41:29 -- setup/common.sh@31 -- # read -r var val _
00:03:41.012 10:41:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 25970980 kB' 'MemUsed: 6614388 kB' 'SwapCached: 0 kB' 'Active: 2894020 kB' 'Inactive: 176724 kB' 'Active(anon): 2707852 kB' 'Inactive(anon): 0 kB' 'Active(file): 186168 kB' 'Inactive(file): 176724 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2682120 kB' 'Mapped: 70624 kB' 'AnonPages: 391812 kB' 'Shmem: 2319228 kB' 'KernelStack: 12552 kB' 'PageTables: 5184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 78624 kB' 'Slab: 391216 kB' 'SReclaimable: 78624 kB' 'SUnreclaim: 312592 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:41.013 10:41:29 -- setup/common.sh@31/@32 -- # per-key scan for \H\u\g\e\P\a\g\e\s\_\S\u\r\p on node0: MemTotal through HugePages_Free; no match, continue
00:03:41.013 10:41:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:41.013 10:41:29 -- setup/common.sh@33 -- # echo 0
00:03:41.013 10:41:29 -- setup/common.sh@33 -- # return 0
00:03:41.013 10:41:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:41.013 10:41:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:41.013 10:41:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:41.013 10:41:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:41.013 10:41:29 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:03:41.013 10:41:29 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:41.013 10:41:29 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:41.013 10:41:29 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:41.013 10:41:29 -- setup/hugepages.sh@202 -- # setup output
00:03:41.013 10:41:29 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:41.013 10:41:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:44.302 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:44.302 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:44.302 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:44.302 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:44.302 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:44.302 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:44.302 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:44.302 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:44.302 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:44.302 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:44.302 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:44.302 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:44.302 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:44.302 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:44.302 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:44.302 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:44.302 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:44.302 INFO: Requested 512 hugepages but 1024 already allocated on node0
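Each "Already using the vfio-pci driver" line means setup.sh found the device's sysfs driver link already pointing at vfio-pci and left the binding alone. One way to make the same check by hand, offered as a sketch only (the BDF is copied from the log):

  # Sketch: report the driver a PCI function is currently bound to
  # by resolving its sysfs "driver" symlink.
  bdf=0000:d8:00.0
  if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
      drv=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
      echo "$bdf: already using the $drv driver"
  else
      echo "$bdf: no driver bound"
  fi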
00:03:44.302 10:41:33 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:44.302 10:41:33 -- setup/hugepages.sh@89 -- # local node
00:03:44.302 10:41:33 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:44.302 10:41:33 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:44.302 10:41:33 -- setup/hugepages.sh@92 -- # local surp
00:03:44.302 10:41:33 -- setup/hugepages.sh@93 -- # local resv
00:03:44.302 10:41:33 -- setup/hugepages.sh@94 -- # local anon
00:03:44.302 10:41:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
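The @96 test is verify_nr_hugepages checking transparent hugepage state: the left-hand side "always [madvise] never" is the current contents of /sys/kernel/mm/transparent_hugepage/enabled (the kernel brackets the active mode), and the pattern *\[\n\e\v\e\r\]* would match only if THP were set to [never]. The same test as a standalone sketch:

  # Sketch: detect globally disabled transparent hugepages before
  # sampling AnonHugePages, as the trace above does.
  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
  if [[ $thp == *"[never]"* ]]; then
      echo "THP disabled ($thp); anon hugepages will not be used" >&2
  else
      echo "THP mode: $thp"   # e.g. always [madvise] never
  fi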
10:41:33 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.302 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.302 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.302 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.302 [... the same @32 key-check/continue and @31 IFS/read trace entries repeat for every remaining /proc/meminfo field down the list ...] 00:03:44.303 10:41:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.303 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.303 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.303 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.303 10:41:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.303 10:41:33 -- setup/common.sh@33 -- # echo 0 00:03:44.303 10:41:33 -- setup/common.sh@33 -- # return 0 00:03:44.303 10:41:33 -- setup/hugepages.sh@97 -- # anon=0 00:03:44.303 10:41:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:44.303 10:41:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.303 10:41:33 -- setup/common.sh@18 -- # local node= 00:03:44.303 10:41:33 -- setup/common.sh@19 -- # local var val 00:03:44.303 10:41:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.303 10:41:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.303 10:41:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.303 10:41:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.303 10:41:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.303 10:41:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.303 10:41:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 43141904 kB' 'MemAvailable: 46860492 kB' 'Buffers: 9316 kB' 'Cached: 11109920 kB' 'SwapCached: 0 kB' 'Active: 7887096 kB' 'Inactive: 3689320 kB' 'Active(anon): 7469416 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 460504 kB' 'Mapped: 165796 kB' 'Shmem: 7012236 kB' 'KReclaimable: 216908 kB' 'Slab: 913896 kB' 'SReclaimable: 216908 kB' 'SUnreclaim: 696988 kB' 'KernelStack: 21888 kB' 'PageTables: 7648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 8617592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214480 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB' 00:03:44.303 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.304 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.304 10:41:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.304 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.304 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.304 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.304 10:41:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.304 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.304 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.304 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.304 10:41:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.304 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.304 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.304 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.304 10:41:33 -- setup/common.sh@32 -- # [[ Buffers 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.304 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.304 [... the same key-check/continue trace entries repeat for every remaining /proc/meminfo field down the list ...] 00:03:44.565 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.565 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.565 10:41:33 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.565 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.565 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.565 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.565 10:41:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.565 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.565 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.565 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.565 10:41:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.565 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.565 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.565 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.565 10:41:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.565 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.565 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.565 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.565 10:41:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.565 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.565 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.565 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.565 10:41:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.565 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.565 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.565 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.565 10:41:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.565 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.565 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.565 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.565 10:41:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.565 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.565 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.565 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.565 10:41:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.565 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.565 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.565 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.565 10:41:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.565 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.565 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.565 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.565 10:41:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.565 10:41:33 -- setup/common.sh@33 -- # echo 0 00:03:44.565 10:41:33 -- setup/common.sh@33 -- # return 0 00:03:44.565 10:41:33 -- setup/hugepages.sh@99 -- # surp=0 00:03:44.565 10:41:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.565 10:41:33 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.565 10:41:33 -- setup/common.sh@18 -- # local node= 00:03:44.565 10:41:33 -- setup/common.sh@19 -- # local var val 00:03:44.565 10:41:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.565 10:41:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.565 10:41:33 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:44.565 10:41:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.565 10:41:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.565 10:41:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.565 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.565 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.565 10:41:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 43143984 kB' 'MemAvailable: 46862572 kB' 'Buffers: 9316 kB' 'Cached: 11109932 kB' 'SwapCached: 0 kB' 'Active: 7886648 kB' 'Inactive: 3689320 kB' 'Active(anon): 7468968 kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 459980 kB' 'Mapped: 165796 kB' 'Shmem: 7012248 kB' 'KReclaimable: 216908 kB' 'Slab: 913912 kB' 'SReclaimable: 216908 kB' 'SUnreclaim: 697004 kB' 'KernelStack: 21824 kB' 'PageTables: 7704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 8617604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214480 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB' 00:03:44.565 10:41:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.565 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.565 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.565 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.565 10:41:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.565 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.566 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.566 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.566 10:41:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.566 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.566 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.566 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.566 10:41:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.566 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.566 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.566 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.566 10:41:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.566 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.566 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.566 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.566 10:41:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.566 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.566 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.566 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.566 10:41:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.566 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.566 10:41:33 
-- setup/common.sh@31 -- # IFS=': ' 00:03:44.566 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.566 [... the same key-check/continue trace entries repeat for every remaining /proc/meminfo field down the list ...] 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.567 10:41:33 -- 
setup/common.sh@32 -- # continue 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.567 10:41:33 -- setup/common.sh@33 -- # echo 0 00:03:44.567 10:41:33 -- setup/common.sh@33 -- # return 0 00:03:44.567 10:41:33 -- setup/hugepages.sh@100 -- # resv=0 00:03:44.567 10:41:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:44.567 nr_hugepages=1024 00:03:44.567 10:41:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.567 resv_hugepages=0 00:03:44.567 10:41:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.567 surplus_hugepages=0 00:03:44.567 10:41:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.567 anon_hugepages=0 00:03:44.567 10:41:33 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.567 10:41:33 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:44.567 10:41:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.567 10:41:33 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:44.567 10:41:33 -- setup/common.sh@18 -- # local node= 00:03:44.567 10:41:33 -- setup/common.sh@19 -- # local var val 00:03:44.567 10:41:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.567 10:41:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.567 10:41:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.567 10:41:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.567 10:41:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.567 10:41:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.567 10:41:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283784 kB' 'MemFree: 43144088 kB' 'MemAvailable: 46862676 kB' 'Buffers: 9316 kB' 'Cached: 11109948 kB' 'SwapCached: 0 kB' 'Active: 7886888 kB' 'Inactive: 3689320 kB' 'Active(anon): 7469208 
kB' 'Inactive(anon): 0 kB' 'Active(file): 417680 kB' 'Inactive(file): 3689320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 460256 kB' 'Mapped: 165780 kB' 'Shmem: 7012264 kB' 'KReclaimable: 216908 kB' 'Slab: 913912 kB' 'SReclaimable: 216908 kB' 'SUnreclaim: 697004 kB' 'KernelStack: 21760 kB' 'PageTables: 7140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 8616104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214400 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 507252 kB' 'DirectMap2M: 11761664 kB' 'DirectMap1G: 57671680 kB' 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.567 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.567 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.567 10:41:33 
-- setup/common.sh@31 -- # read -r var val _ 00:03:44.567 [... the same key-check/continue trace entries repeat for every remaining /proc/meminfo field down the list ...] 00:03:44.568 10:41:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.568 10:41:33 -- 
setup/common.sh@32 -- # continue 00:03:44.568 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.568 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.568 10:41:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.568 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.568 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.568 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.568 10:41:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.568 10:41:33 -- setup/common.sh@33 -- # echo 1024 00:03:44.568 10:41:33 -- setup/common.sh@33 -- # return 0 00:03:44.568 10:41:33 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.568 10:41:33 -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.568 10:41:33 -- setup/hugepages.sh@27 -- # local node 00:03:44.568 10:41:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.568 10:41:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:44.568 10:41:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.568 10:41:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:44.568 10:41:33 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:44.568 10:41:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.568 10:41:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.568 10:41:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.568 10:41:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.568 10:41:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.568 10:41:33 -- setup/common.sh@18 -- # local node=0 00:03:44.568 10:41:33 -- setup/common.sh@19 -- # local var val 00:03:44.568 10:41:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.568 10:41:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.568 10:41:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.568 10:41:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.568 10:41:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.568 10:41:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.568 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.568 10:41:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 25991312 kB' 'MemUsed: 6594056 kB' 'SwapCached: 0 kB' 'Active: 2893876 kB' 'Inactive: 176724 kB' 'Active(anon): 2707708 kB' 'Inactive(anon): 0 kB' 'Active(file): 186168 kB' 'Inactive(file): 176724 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2682184 kB' 'Mapped: 70636 kB' 'AnonPages: 391636 kB' 'Shmem: 2319292 kB' 'KernelStack: 12584 kB' 'PageTables: 5220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 78624 kB' 'Slab: 391520 kB' 'SReclaimable: 78624 kB' 'SUnreclaim: 312896 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:44.568 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.568 10:41:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.568 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.568 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.568 10:41:33 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.568 10:41:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.568 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.568 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.568 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.568 10:41:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.568 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.568 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.568 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.568 10:41:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.568 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.568 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.568 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.568 10:41:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.568 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.568 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.568 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.568 10:41:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.568 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.568 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.568 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.568 10:41:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.568 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.568 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.568 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.568 10:41:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.568 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.568 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.568 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.568 10:41:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.568 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.568 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.568 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.568 10:41:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # 
continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # continue 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.569 10:41:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.569 10:41:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.569 10:41:33 -- setup/common.sh@33 -- # echo 0 00:03:44.569 10:41:33 -- setup/common.sh@33 -- # return 0 00:03:44.569 10:41:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.569 10:41:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.569 10:41:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.569 10:41:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.569 10:41:33 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:44.569 node0=1024 expecting 1024 00:03:44.569 10:41:33 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:44.569 
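The flood of @31/@32 checks above is one helper doing a linear scan of a meminfo file. Distilled into standalone form, the lookup is roughly the following (a sketch reconstructed from the xtrace; the name mirrors setup/common.sh's get_meminfo, but the body is a paraphrase, not the shipped code):

    #!/usr/bin/env bash
    # get_meminfo KEY [NODE]: print KEY's value from /proc/meminfo, or from
    # the per-node file when NODE is given. Per-node files prefix every line
    # with "Node <n> ", which is stripped before parsing.
    get_meminfo() {
        local get=$1 node=$2
        local var val _ mem_f=/proc/meminfo
        local -a mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        shopt -s extglob                     # needed for the +([0-9]) pattern
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then    # every earlier key falls through
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total        # 1024 in the run above
    get_meminfo HugePages_Surp 0       # surplus pages on node0: 0

This is why the trace repeats the same compare-and-continue pair once per field: the scan only stops at the requested key.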
00:03:44.569 real 0m7.013s
00:03:44.569 user 0m2.615s
00:03:44.569 sys 0m4.513s
00:03:44.569 10:41:33 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:44.569 10:41:33 -- common/autotest_common.sh@10 -- # set +x
00:03:44.569 ************************************
00:03:44.569 END TEST no_shrink_alloc
00:03:44.569 ************************************
00:03:44.569 10:41:33 -- setup/hugepages.sh@217 -- # clear_hp
00:03:44.569 10:41:33 -- setup/hugepages.sh@37 -- # local node hp
00:03:44.569 10:41:33 -- setup/hugepages.sh@39-41 -- # [clear loop: for each of the two nodes, echo 0 for every /sys/devices/system/node/node$node/hugepages/hugepages-* size directory]
00:03:44.569 10:41:33 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:44.569 10:41:33 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:44.569 real 0m26.792s
00:03:44.569 user 0m9.556s
00:03:44.569 sys 0m16.240s
00:03:44.569 10:41:33 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:44.569 10:41:33 -- common/autotest_common.sh@10 -- # set +x
00:03:44.569 ************************************
00:03:44.569 END TEST hugepages
00:03:44.569 ************************************
00:03:44.569 10:41:33 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:03:44.569 10:41:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:44.569 10:41:33 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:44.569 10:41:33 -- common/autotest_common.sh@10 -- # set +x
00:03:44.569 ************************************
00:03:44.569 START TEST driver
00:03:44.569 ************************************
00:03:44.569 10:41:33 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:03:44.569 * Looking for test storage...
00:03:44.828 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:03:44.828 10:41:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:03:44.828 10:41:33 -- common/autotest_common.sh@1690 -- # lcov --version
00:03:44.828 10:41:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:03:44.828 10:41:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:03:44.828 10:41:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:03:44.828 10:41:33 -- scripts/common.sh@332-344 -- # [cmp_versions setup: ver1/ver2 split with IFS=.-:, op='<', ver1_l=2, ver2_l=1]
00:03:44.828 10:41:33 -- scripts/common.sh@363-365 -- # [field-by-field compare: decimal 1 vs decimal 2]
00:03:44.828 10:41:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:03:44.828 10:41:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:03:44.828 10:41:33 -- scripts/common.sh@367 -- # return 0
00:03:44.828 10:41:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
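The cmp_versions trace above is a field-by-field compare of dotted version strings, invoked here as `lt 1.15 2` to detect a pre-2.0 lcov. A minimal sketch of the same logic (reconstructed; the shipped scripts/common.sh additionally routes each field through a `decimal` sanitizer, omitted here):

    #!/usr/bin/env bash
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v len
        IFS=.-: read -ra ver1 <<< "$1"       # split on '.', '-' and ':'
        IFS=.-: read -ra ver2 <<< "$3"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do    # missing fields count as 0
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
                [[ $op == '>' ]]; return
            elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
                [[ $op == '<' ]]; return
            fi
        done
        [[ $op == '==' ]]                    # all fields equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo 'lcov predates 2.0'    # matches the return 0 traced above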
00:03:44.828 10:41:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:03:44.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:44.828 --rc genhtml_branch_coverage=1
00:03:44.829 --rc genhtml_function_coverage=1
00:03:44.829 --rc genhtml_legend=1
00:03:44.829 --rc geninfo_all_blocks=1
00:03:44.829 --rc geninfo_unexecuted_blocks=1
00:03:44.829 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:03:44.829 '
00:03:44.829 10:41:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' [same option block] '
00:03:44.829 10:41:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov [same option block] '
00:03:44.829 10:41:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov [same option block] '
00:03:44.829 10:41:33 -- setup/driver.sh@68 -- # setup reset
00:03:44.829 10:41:33 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:44.829 10:41:33 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:03:50.151 10:41:38 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:03:50.152 10:41:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:50.152 10:41:38 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:50.152 10:41:38 -- common/autotest_common.sh@10 -- # set +x
00:03:50.152 ************************************
00:03:50.152 START TEST guess_driver
00:03:50.152 ************************************
00:03:50.152 10:41:38 -- common/autotest_common.sh@1114 -- # guess_driver
00:03:50.152 10:41:38 -- setup/driver.sh@46 -- # local driver setup_driver marker
00:03:50.152 10:41:38 -- setup/driver.sh@47 -- # local fail=0
00:03:50.152 10:41:38 -- setup/driver.sh@49 -- # pick_driver
00:03:50.152 10:41:38 -- setup/driver.sh@36 -- # vfio
00:03:50.152 10:41:38 -- setup/driver.sh@21 -- # local iommu_grups
00:03:50.152 10:41:38 -- setup/driver.sh@22 -- # local unsafe_vfio
00:03:50.152 10:41:38 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:03:50.152 10:41:38 -- setup/driver.sh@25 -- # unsafe_vfio=N
00:03:50.152 10:41:38 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:03:50.152 10:41:38 -- setup/driver.sh@29 -- # (( 176 > 0 ))
00:03:50.152 10:41:38 -- setup/driver.sh@30 -- # is_driver vfio_pci
00:03:50.152 10:41:38 -- setup/driver.sh@14 -- # mod vfio_pci
00:03:50.152 10:41:38 -- setup/driver.sh@12 -- # dep vfio_pci
00:03:50.152 10:41:38 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:03:50.152 10:41:38 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:03:50.152 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:50.152 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:50.152 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:50.152 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:50.152 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:03:50.152 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:03:50.152 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:03:50.152 10:41:38 -- setup/driver.sh@30 -- # return 0
00:03:50.152 10:41:38 -- setup/driver.sh@37 -- # echo vfio-pci
00:03:50.152 10:41:38 -- setup/driver.sh@49 -- # driver=vfio-pci
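pick_driver's decision, as traced, comes down to two facts: 176 IOMMU groups exist, and modprobe can resolve vfio_pci to real kernel objects. A compressed sketch of that logic (paraphrased from the xtrace; the uio_pci_generic fallback is an assumption, since the log only exercises the vfio path):

    #!/usr/bin/env bash
    pick_driver() {
        local unsafe_vfio=N
        local -a iommu_groups
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        iommu_groups=(/sys/kernel/iommu_groups/*)
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            # is_driver: accept vfio_pci if modprobe resolves it to .ko objects
            if modprobe --show-depends vfio_pci | grep -q '\.ko'; then
                echo vfio-pci
                return 0
            fi
        fi
        echo uio_pci_generic   # assumed fallback, not shown in this run
    }

    driver=$(pick_driver)      # vfio-pci on this host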
00:03:50.152 10:41:38 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:03:50.152 10:41:38 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:03:50.152 Looking for driver=vfio-pci
00:03:50.152 10:41:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:50.152 10:41:38 -- setup/driver.sh@45 -- # setup output config
00:03:50.152 10:41:38 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:50.152 10:41:38 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:03:53.441 10:41:41 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:53.441 10:41:41 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:53.441 10:41:41 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:53.441-00:03:54.819 -- setup/driver.sh@58/61/57 -- # [the marker-check triple repeats for each remaining line of setup.sh config output; every marker reports vfio-pci]
00:03:54.819 10:41:43 -- setup/driver.sh@64 -- # (( fail == 0 ))
00:03:54.819 10:41:43 -- setup/driver.sh@65 -- # setup reset
00:03:54.819 10:41:43 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:54.819 10:41:43 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:03:59.010
00:03:59.010 real 0m9.583s
00:03:59.010 user 0m2.536s
00:03:59.010 sys 0m4.836s
00:03:59.010 10:41:47 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:59.010 10:41:47 -- common/autotest_common.sh@10 -- # set +x
00:03:59.010 ************************************
00:03:59.010 END TEST guess_driver
00:03:59.010 ************************************
00:03:59.269
00:03:59.269 real 0m14.553s
00:03:59.269 user 0m3.986s
00:03:59.269 sys 0m7.609s
00:03:59.269 10:41:48 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:59.269 10:41:48 -- common/autotest_common.sh@10 -- # set +x
00:03:59.269 ************************************
00:03:59.269 END TEST driver
00:03:59.269 ************************************
00:03:59.269 10:41:48 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
00:03:59.269 10:41:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:59.269 10:41:48 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:59.269 10:41:48 -- common/autotest_common.sh@10 -- # set +x
00:03:59.269 ************************************
00:03:59.269 START TEST devices
00:03:59.269 ************************************
00:03:59.270 10:41:48 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
00:03:59.270 * Looking for test storage...
00:03:59.270 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:03:59.270 10:41:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:03:59.270 10:41:48 -- common/autotest_common.sh@1690 -- # lcov --version
00:03:59.270 10:41:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:03:59.270 10:41:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:03:59.270 10:41:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:03:59.270 10:41:48 -- scripts/common.sh@332-367 -- # [the same lcov version check traced at the start of TEST driver runs again, field by field, and returns 0]
00:03:59.270 10:41:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:59.270 10:41:48 -- common/autotest_common.sh@1703-1704 -- # [export LCOV_OPTS and LCOV with the same option block as in TEST driver above]
00:03:59.270 10:41:48 -- setup/devices.sh@190 -- # trap cleanup EXIT
00:03:59.270 10:41:48 -- setup/devices.sh@192 -- # setup reset
00:03:59.270 10:41:48 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:59.270 10:41:48 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:04:03.465 10:41:51 -- setup/devices.sh@194 -- # get_zoned_devs
00:04:03.465 10:41:51 -- common/autotest_common.sh@1664-1665 -- # [zoned_devs=(); local -gA zoned_devs; local nvme bdf]
00:04:03.465 10:41:51 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:04:03.465 10:41:51 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1
00:04:03.465 10:41:51 -- common/autotest_common.sh@1657 -- # local device=nvme0n1
00:04:03.465 10:41:51 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:03.465 10:41:51 -- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:04:03.465 10:41:51 -- setup/devices.sh@196-198 -- # [blocks=(); blocks_to_pci=(); min_disk_size=3221225472]
00:04:03.465 10:41:51 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:03.465 10:41:51 -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:04:03.465 10:41:51 -- setup/devices.sh@201 -- # ctrl=nvme0
00:04:03.465 10:41:51 -- setup/devices.sh@202 -- # pci=0000:d8:00.0
00:04:03.465 10:41:51 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]]
00:04:03.465 10:41:51 -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:04:03.465 10:41:51 -- scripts/common.sh@380 -- # local block=nvme0n1 pt
00:04:03.465 10:41:51 -- scripts/common.sh@389 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:04:03.465 No valid GPT data, bailing
00:04:03.465 10:41:51 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:03.465 10:41:51 -- scripts/common.sh@393 -- # pt=
00:04:03.465 10:41:51 -- scripts/common.sh@394 -- # return 1
00:04:03.465 10:41:51 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:04:03.465 10:41:51 -- setup/common.sh@76 -- # local dev=nvme0n1
00:04:03.465 10:41:51 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:04:03.465 10:41:51 -- setup/common.sh@80 -- # echo 1600321314816
00:04:03.465 10:41:51 -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size ))
00:04:03.465 10:41:51 -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:03.465 10:41:51 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0
00:04:03.465 10:41:51 -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:04:03.465 10:41:51 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
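The enumeration above boils down to: keep an NVMe namespace as the test disk if nothing claims it, it carries no partition-table signature, and it is at least min_disk_size. A standalone sketch under those assumptions (plain blkid stands in for block_in_use, which also consults spdk-gpt.py; the sysfs-to-PCI hop assumes the usual /sys/block layout):

    #!/usr/bin/env bash
    min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3 GiB, as in the trace

    blocks=()
    declare -A blocks_to_pci
    for block in /sys/block/nvme*n*; do
        dev=${block##*/}
        pci=$(basename "$(readlink -f "$block/device/device")")
        # skip namespaces that already carry a partition table
        [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue
        size=$(( $(< "$block/size") * 512 ))     # sectors -> bytes
        if (( size >= min_disk_size )); then
            blocks+=("$dev")
            blocks_to_pci[$dev]=$pci
        fi
    done
    printf 'candidate: %s\n' "${blocks[@]}"      # nvme0n1 (0000:d8:00.0) here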
00:04:03.465 10:41:51 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:04:03.465 10:41:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:03.465 10:41:51 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:03.465 10:41:51 -- common/autotest_common.sh@10 -- # set +x
00:04:03.466 ************************************
00:04:03.466 START TEST nvme_mount
00:04:03.466 ************************************
00:04:03.466 10:41:51 -- common/autotest_common.sh@1114 -- # nvme_mount
00:04:03.466 10:41:51 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:04:03.466 10:41:51 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:04:03.466 10:41:51 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:03.466 10:41:51 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:03.466 10:41:51 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:04:03.466 10:41:51 -- setup/common.sh@39-46 -- # [partition_drive locals: disk=nvme0n1, part_no=1, size=1073741824; parts=(nvme0n1p1); size /= 512]
00:04:03.466 10:41:51 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:03.466 10:41:51 -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:04:04.035 Creating new GPT entries in memory.
00:04:04.035 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:04.035 other utilities.
00:04:04.035 10:41:52 -- setup/common.sh@57 -- # (( part = 1 ))
00:04:04.035 10:41:52 -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:04.035 10:41:52 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:04.035 10:41:52 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:04.035 10:41:52 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:04.972 Creating new GPT entries in memory.
00:04:04.972 The operation has completed successfully.
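Stripped of the xtrace bookkeeping, the partition step just performed (and the mkfs/mount that follows below) reduces to a few commands. A sketch with a placeholder mount point; the real harness waits for udev partition events via sync_dev_uevents.sh, approximated here with udevadm settle:

    #!/usr/bin/env bash
    set -e
    disk=/dev/nvme0n1
    mnt=/tmp/nvme_mount                  # stand-in for the test mount point

    size=$((1073741824 / 512))           # 1 GiB expressed in 512-byte sectors
    part_start=2048
    part_end=$((part_start + size - 1))  # 2099199, as in the trace

    sgdisk "$disk" --zap-all
    flock "$disk" sgdisk "$disk" --new=1:$part_start:$part_end
    udevadm settle                       # assumption: simplest wait for ${disk}p1

    mkdir -p "$mnt"
    mkfs.ext4 -qF "${disk}p1"
    mount "${disk}p1" "$mnt"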
00:04:04.972 10:41:53 -- setup/common.sh@57 -- # (( part++ ))
00:04:04.972 10:41:53 -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:04.972 10:41:53 -- setup/common.sh@62 -- # wait 1265667
00:04:04.973 10:41:53 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:04.973 10:41:53 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=
00:04:04.973 10:41:53 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:04.973 10:41:53 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:04:04.973 10:41:53 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:04:04.973 10:41:53 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:04.973 10:41:53 -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:04.973 10:41:53 -- setup/devices.sh@48-56 -- # [verify locals: dev=0000:d8:00.0, mounts=nvme0n1:nvme0n1p1, mount_point and test_file as above, found=0]
00:04:04.973 10:41:53 -- setup/devices.sh@59 -- # local pci status
00:04:04.973 10:41:53 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:04.973 10:41:53 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0
00:04:04.973 10:41:53 -- setup/devices.sh@47 -- # setup output config
00:04:04.973 10:41:53 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:04.973 10:41:53 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:04:08.371 10:41:57 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:08.371 10:41:57 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:04:08.371 10:41:57 -- setup/devices.sh@63 -- # found=1
00:04:08.371 10:41:57 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:08.371 10:41:57 -- setup/devices.sh@62/60 -- # [the compare/read pair repeats for 0000:00:04.0-7 and 0000:80:04.0-7; none match 0000:d8:00.0]
00:04:08.371 10:41:57 -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:08.371 10:41:57 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]]
00:04:08.371 10:41:57 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:08.371 10:41:57 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:08.371 10:41:57 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:08.371 10:41:57 -- setup/devices.sh@110 -- # cleanup_nvme
00:04:08.371 10:41:57 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:08.371 10:41:57 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:08.371 10:41:57 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:08.371 10:41:57 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:08.371 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:08.371 10:41:57 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:08.371 10:41:57 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:08.631 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:08.631 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54
00:04:08.631 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:08.631 /dev/nvme0n1: calling ioctl to re-read partition table: Success
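cleanup_nvme, as traced, is a three-step teardown; the wipefs output above shows exactly what it erases (the ext4 magic 53 ef on the partition, then the GPT headers and protective MBR on the disk). A sketch with placeholder arguments:

    #!/usr/bin/env bash
    cleanup_nvme() {
        local mnt=$1 part=$2 disk=$3
        mountpoint -q "$mnt" && umount "$mnt"
        [[ -b $part ]] && wipefs --all "$part"   # drops the filesystem signature
        [[ -b $disk ]] && wipefs --all "$disk"   # drops GPT (primary+backup) and PMBR
    }

    cleanup_nvme /tmp/nvme_mount /dev/nvme0n1p1 /dev/nvme0n1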
00:04:08.631 10:41:57 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:04:08.631 10:41:57 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:04:08.631 10:41:57 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:08.890 10:41:57 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:04:08.890 10:41:57 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:04:08.890 10:41:57 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:08.890 10:41:57 -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:08.890 10:41:57 -- setup/devices.sh@48-56 -- # [verify locals: dev=0000:d8:00.0, mounts=nvme0n1:nvme0n1, mount_point and test_file as above, found=0]
00:04:08.890 10:41:57 -- setup/devices.sh@59 -- # local pci status
00:04:08.890 10:41:57 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:08.890 10:41:57 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0
00:04:08.890 10:41:57 -- setup/devices.sh@47 -- # setup output config
00:04:08.890 10:41:57 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:08.890 10:41:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:04:12.178 10:42:00 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:12.178 10:42:00 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:04:12.178 10:42:00 -- setup/devices.sh@63 -- # found=1
00:04:12.178 10:42:00 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:12.178 10:42:00 -- setup/devices.sh@62/60 -- # [the compare/read pair repeats for 0000:00:04.0-7 and 0000:80:04.0-7; none match 0000:d8:00.0]
00:04:12.179 10:42:01 -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:12.179 10:42:01 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]]
00:04:12.179 10:42:01 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:12.179 10:42:01 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:12.179 10:42:01 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:12.179 10:42:01 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:12.179 10:42:01 -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' ''
00:04:12.179 10:42:01 -- setup/devices.sh@48-55 -- # [verify locals: dev=0000:d8:00.0, mounts=data@nvme0n1, empty mount_point and test_file, found=0]
00:04:12.179 10:42:01 -- setup/devices.sh@59 -- # local pci status
00:04:12.179 10:42:01 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:12.179 10:42:01 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0
00:04:12.179 10:42:01 -- setup/devices.sh@47 -- # setup output config
00:04:12.179 10:42:01 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:12.179 10:42:01 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
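The verify calls traced in this test all follow one pattern: run setup.sh config with PCI_ALLOWED pinned to the test device and require that the device is reported as active (claimed by a mount, a partition holder, or data) rather than rebound. A sketch of that loop (the config output format is inferred from the `read -r pci _ _ status` pattern, so treat it as an assumption):

    #!/usr/bin/env bash
    verify() {
        local dev=$1 mounts=$2 pci status found=0
        while read -r pci _ _ status; do
            if [[ $pci == "$dev" && $status == *"Active devices: "*"$mounts"* ]]; then
                found=1
            fi
        done < <(PCI_ALLOWED="$dev" ./scripts/setup.sh config)
        (( found == 1 ))
    }

    verify 0000:d8:00.0 data@nvme0n1 && echo 'device still claimed, as expected'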
00:04:15.468 10:42:04 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:15.468 10:42:04 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:04:15.468 10:42:04 -- setup/devices.sh@63 -- # found=1
00:04:15.468 10:42:04 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:15.468 10:42:04 -- setup/devices.sh@62/60 -- # [the compare/read pair repeats for 0000:00:04.0-7 and 0000:80:04.0-7; none match 0000:d8:00.0]
00:04:15.728 10:42:04 -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:15.728 10:42:04 -- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:15.728 10:42:04 -- setup/devices.sh@68 -- # return 0
00:04:15.728 10:42:04 -- setup/devices.sh@128 -- # cleanup_nvme
00:04:15.728 10:42:04 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:15.728 10:42:04 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:15.728 10:42:04 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:15.728 10:42:04 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:15.728 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:15.728
00:04:15.728 real 0m12.872s
00:04:15.728 user 0m3.824s
00:04:15.728 sys 0m7.013s
00:04:15.728 10:42:04 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:15.728 10:42:04 -- common/autotest_common.sh@10 -- # set +x
00:04:15.728 ************************************
00:04:15.728 END TEST nvme_mount
00:04:15.728 ************************************
00:04:15.728 10:42:04 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:04:15.728 10:42:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:15.728 10:42:04 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:15.728 10:42:04 -- common/autotest_common.sh@10 -- # set +x
00:04:15.728 ************************************
00:04:15.728 START TEST dm_mount
00:04:15.728 ************************************
00:04:15.728 10:42:04 -- common/autotest_common.sh@1114 -- # dm_mount
00:04:15.728 10:42:04 -- setup/devices.sh@144 -- # pv=nvme0n1
00:04:15.728 10:42:04 -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:04:15.728 10:42:04 -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:04:15.728 10:42:04 -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:04:15.728 10:42:04 -- setup/common.sh@39-46 -- # [partition_drive locals: disk=nvme0n1, part_no=2, size=1073741824; parts=(nvme0n1p1 nvme0n1p2); size /= 512]
00:04:15.728 10:42:04 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:15.728 10:42:04 -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:04:17.107 Creating new GPT entries in memory.
00:04:17.107 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:17.107 other utilities.
00:04:17.107 10:42:05 -- setup/common.sh@57 -- # (( part = 1 ))
00:04:17.107 10:42:05 -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:17.107 10:42:05 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:17.107 10:42:05 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:17.107 10:42:05 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:18.044 Creating new GPT entries in memory.
00:04:18.044 The operation has completed successfully.
00:04:18.044 10:42:06 -- setup/common.sh@57 -- # (( part++ )) 00:04:18.044 10:42:06 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.044 10:42:06 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:18.044 10:42:06 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:18.044 10:42:06 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:18.986 The operation has completed successfully. 00:04:18.986 10:42:07 -- setup/common.sh@57 -- # (( part++ )) 00:04:18.986 10:42:07 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.986 10:42:07 -- setup/common.sh@62 -- # wait 1270581 00:04:18.986 10:42:07 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:18.986 10:42:07 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:18.986 10:42:07 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:18.986 10:42:07 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:18.986 10:42:07 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:18.986 10:42:07 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:18.986 10:42:07 -- setup/devices.sh@161 -- # break 00:04:18.986 10:42:07 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:18.986 10:42:07 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:18.986 10:42:07 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:18.986 10:42:07 -- setup/devices.sh@166 -- # dm=dm-0 00:04:18.986 10:42:07 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:18.986 10:42:07 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:18.986 10:42:07 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:18.986 10:42:07 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:04:18.986 10:42:07 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:18.986 10:42:07 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:18.986 10:42:07 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:18.986 10:42:07 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:18.986 10:42:07 -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:18.986 10:42:07 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:18.986 10:42:07 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:18.986 10:42:07 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:18.986 10:42:07 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:18.986 10:42:07 -- setup/devices.sh@53 -- # local found=0 00:04:18.986 10:42:07 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:18.986 10:42:07 -- setup/devices.sh@56 -- # : 
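The partition arithmetic traced above reduces to a small loop: setup/common.sh converts the 1 GiB size into 512-byte sectors, zaps the disk, then carves consecutive partitions under flock. A minimal sketch of that loop (disk name and size hard-coded here for illustration; the real script also waits on udev events via sync_dev_uevents.sh):

    disk=nvme0n1
    size=$((1073741824 / 512))            # 1 GiB in 512 B sectors = 2097152
    part_start=0 part_end=0
    sgdisk "/dev/$disk" --zap-all         # destroy GPT/MBR data, as logged above
    for part in 1 2; do
      (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
      (( part_end = part_start + size - 1 ))
      # yields 1:2048:2099199 and 2:2099200:4196351, matching the sgdisk calls above
      flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
    done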
00:04:18.986 10:42:07 -- setup/devices.sh@59 -- # local pci status 00:04:18.986 10:42:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.986 10:42:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:18.986 10:42:07 -- setup/devices.sh@47 -- # setup output config 00:04:18.986 10:42:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.986 10:42:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:22.278 10:42:10 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.279 10:42:10 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:22.279 10:42:10 -- setup/devices.sh@63 -- # found=1 00:04:22.279 10:42:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.279 10:42:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.279 10:42:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.279 10:42:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.279 10:42:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.279 10:42:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.279 10:42:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.279 10:42:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.279 10:42:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.279 10:42:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.279 10:42:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.279 10:42:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.279 10:42:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.279 10:42:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.279 10:42:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.279 10:42:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.279 10:42:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.279 10:42:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.279 10:42:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.279 10:42:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.279 10:42:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.279 10:42:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.279 10:42:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.279 10:42:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.279 10:42:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.279 10:42:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.279 10:42:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.279 10:42:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.279 10:42:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.279 10:42:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.279 10:42:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.279 10:42:11 -- setup/devices.sh@62 -- # 
[[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.279 10:42:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.279 10:42:11 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:22.279 10:42:11 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:22.279 10:42:11 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:22.279 10:42:11 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:22.279 10:42:11 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:22.279 10:42:11 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:22.279 10:42:11 -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:22.279 10:42:11 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:22.279 10:42:11 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:22.279 10:42:11 -- setup/devices.sh@50 -- # local mount_point= 00:04:22.279 10:42:11 -- setup/devices.sh@51 -- # local test_file= 00:04:22.279 10:42:11 -- setup/devices.sh@53 -- # local found=0 00:04:22.279 10:42:11 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:22.279 10:42:11 -- setup/devices.sh@59 -- # local pci status 00:04:22.279 10:42:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.279 10:42:11 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:22.279 10:42:11 -- setup/devices.sh@47 -- # setup output config 00:04:22.279 10:42:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.279 10:42:11 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:25.570 10:42:14 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.570 10:42:14 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:25.570 10:42:14 -- setup/devices.sh@63 -- # found=1 00:04:25.570 10:42:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.570 10:42:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.570 10:42:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.570 10:42:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.570 10:42:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.570 10:42:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.570 10:42:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.570 10:42:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.570 10:42:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.570 10:42:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.570 10:42:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.570 10:42:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.570 10:42:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.570 10:42:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.570 10:42:14 -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:04:25.570 10:42:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.570 10:42:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.570 10:42:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.570 10:42:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.570 10:42:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.570 10:42:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.570 10:42:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.570 10:42:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.570 10:42:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.571 10:42:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.571 10:42:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.571 10:42:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.571 10:42:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.571 10:42:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.571 10:42:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.571 10:42:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.571 10:42:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.571 10:42:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.571 10:42:14 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:25.571 10:42:14 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:25.571 10:42:14 -- setup/devices.sh@68 -- # return 0 00:04:25.571 10:42:14 -- setup/devices.sh@187 -- # cleanup_dm 00:04:25.571 10:42:14 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:25.571 10:42:14 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:25.571 10:42:14 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:25.571 10:42:14 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:25.571 10:42:14 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:25.571 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:25.571 10:42:14 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:25.571 10:42:14 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:25.571 00:04:25.571 real 0m9.850s 00:04:25.571 user 0m2.444s 00:04:25.571 sys 0m4.510s 00:04:25.571 10:42:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:25.571 10:42:14 -- common/autotest_common.sh@10 -- # set +x 00:04:25.571 ************************************ 00:04:25.571 END TEST dm_mount 00:04:25.571 ************************************ 00:04:25.571 10:42:14 -- setup/devices.sh@1 -- # cleanup 00:04:25.571 10:42:14 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:25.571 10:42:14 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.830 10:42:14 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:25.830 10:42:14 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:25.830 10:42:14 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:25.830 10:42:14 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:26.090 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:26.090 /dev/nvme0n1: 8 
bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:26.090 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:26.090 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:26.090 10:42:14 -- setup/devices.sh@12 -- # cleanup_dm 00:04:26.090 10:42:14 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:26.090 10:42:14 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:26.090 10:42:14 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:26.090 10:42:14 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:26.090 10:42:14 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:26.090 10:42:14 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:26.090 00:04:26.090 real 0m26.800s 00:04:26.090 user 0m7.664s 00:04:26.090 sys 0m14.093s 00:04:26.090 10:42:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:26.090 10:42:14 -- common/autotest_common.sh@10 -- # set +x 00:04:26.090 ************************************ 00:04:26.090 END TEST devices 00:04:26.090 ************************************ 00:04:26.090 00:04:26.090 real 1m32.446s 00:04:26.090 user 0m29.190s 00:04:26.090 sys 0m52.538s 00:04:26.090 10:42:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:26.090 10:42:14 -- common/autotest_common.sh@10 -- # set +x 00:04:26.090 ************************************ 00:04:26.090 END TEST setup.sh 00:04:26.090 ************************************ 00:04:26.090 10:42:14 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:04:29.378 Hugepages 00:04:29.378 node hugesize free / total 00:04:29.378 node0 1048576kB 0 / 0 00:04:29.378 node0 2048kB 2048 / 2048 00:04:29.378 node1 1048576kB 0 / 0 00:04:29.378 node1 2048kB 0 / 0 00:04:29.378 00:04:29.378 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:29.378 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:29.378 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:29.378 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:29.378 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:29.378 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:29.378 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:29.378 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:29.378 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:29.378 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:29.378 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:29.378 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:29.378 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:29.378 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:29.378 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:29.378 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:29.378 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:29.378 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:29.378 10:42:18 -- spdk/autotest.sh@128 -- # uname -s 00:04:29.378 10:42:18 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:29.378 10:42:18 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:29.378 10:42:18 -- common/autotest_common.sh@1526 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:32.670 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:32.670 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:32.670 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:32.670 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:32.670 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 
00:04:32.670 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:32.670 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:32.670 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:32.670 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:32.671 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:32.671 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:32.671 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:32.671 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:32.671 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:32.671 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:32.671 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:34.577 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:34.577 10:42:23 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:35.515 10:42:24 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:35.515 10:42:24 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:35.515 10:42:24 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:35.515 10:42:24 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:35.515 10:42:24 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:35.515 10:42:24 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:35.515 10:42:24 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:35.515 10:42:24 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:35.515 10:42:24 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:35.515 10:42:24 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:04:35.515 10:42:24 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0 00:04:35.515 10:42:24 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:38.806 Waiting for block devices as requested 00:04:38.806 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:38.806 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:38.806 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:38.806 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:38.806 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:38.806 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:39.064 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:39.064 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:39.064 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:39.323 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:39.323 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:39.323 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:39.582 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:39.582 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:39.582 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:39.841 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:39.841 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:40.100 10:42:28 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:40.100 10:42:28 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:40.100 10:42:28 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 00:04:40.100 10:42:28 -- common/autotest_common.sh@1497 -- # grep 0000:d8:00.0/nvme/nvme 00:04:40.100 10:42:28 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:40.100 10:42:28 -- common/autotest_common.sh@1498 -- # [[ -z 
/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:40.100 10:42:28 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:40.100 10:42:28 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:04:40.100 10:42:28 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:04:40.100 10:42:28 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:04:40.100 10:42:28 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:40.100 10:42:28 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:40.100 10:42:28 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:40.100 10:42:28 -- common/autotest_common.sh@1540 -- # oacs=' 0xe' 00:04:40.100 10:42:28 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:40.100 10:42:28 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:40.100 10:42:28 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:40.100 10:42:28 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:04:40.100 10:42:28 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:40.100 10:42:28 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:40.100 10:42:28 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:40.100 10:42:28 -- common/autotest_common.sh@1552 -- # continue 00:04:40.100 10:42:28 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:04:40.100 10:42:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:40.100 10:42:28 -- common/autotest_common.sh@10 -- # set +x 00:04:40.100 10:42:29 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:04:40.100 10:42:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:40.100 10:42:29 -- common/autotest_common.sh@10 -- # set +x 00:04:40.100 10:42:29 -- spdk/autotest.sh@137 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:43.389 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:43.389 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:43.389 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:43.389 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:43.389 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:43.389 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:43.389 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:43.389 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:43.389 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:43.389 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:43.389 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:43.389 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:43.389 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:43.389 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:43.389 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:43.389 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:45.294 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:45.294 10:42:33 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:04:45.294 10:42:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:45.294 10:42:33 -- common/autotest_common.sh@10 -- # set +x 00:04:45.294 10:42:33 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:04:45.294 10:42:33 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:04:45.294 10:42:33 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:04:45.294 10:42:33 -- common/autotest_common.sh@1572 -- # bdfs=() 00:04:45.294 10:42:33 -- common/autotest_common.sh@1572 -- # local bdfs 00:04:45.294 10:42:33 -- common/autotest_common.sh@1574 -- # 
get_nvme_bdfs 00:04:45.294 10:42:33 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:45.294 10:42:33 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:45.294 10:42:33 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:45.294 10:42:33 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:45.294 10:42:33 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:45.294 10:42:34 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:04:45.294 10:42:34 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0 00:04:45.294 10:42:34 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:45.294 10:42:34 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:45.294 10:42:34 -- common/autotest_common.sh@1575 -- # device=0x0a54 00:04:45.294 10:42:34 -- common/autotest_common.sh@1576 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:45.294 10:42:34 -- common/autotest_common.sh@1577 -- # bdfs+=($bdf) 00:04:45.294 10:42:34 -- common/autotest_common.sh@1581 -- # printf '%s\n' 0000:d8:00.0 00:04:45.294 10:42:34 -- common/autotest_common.sh@1587 -- # [[ -z 0000:d8:00.0 ]] 00:04:45.294 10:42:34 -- common/autotest_common.sh@1592 -- # spdk_tgt_pid=1280409 00:04:45.294 10:42:34 -- common/autotest_common.sh@1593 -- # waitforlisten 1280409 00:04:45.294 10:42:34 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:45.294 10:42:34 -- common/autotest_common.sh@829 -- # '[' -z 1280409 ']' 00:04:45.294 10:42:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.294 10:42:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:45.294 10:42:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.294 10:42:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:45.294 10:42:34 -- common/autotest_common.sh@10 -- # set +x 00:04:45.294 [2024-12-15 10:42:34.135949] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
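The controller discovery traced in get_nvme_bdfs above boils down to one pipeline plus a sysfs filter. A sketch under the same paths (jq filter and the 0x0a54 device ID are copied from the log; array names are illustrative):

    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    # Enumerate NVMe controllers as PCI BDFs, e.g. 0000:d8:00.0 on this node
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
    opal_bdfs=()
    for bdf in "${bdfs[@]}"; do
      # keep only controllers whose PCI device ID matches the one checked above
      [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && opal_bdfs+=("$bdf")
    done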
00:04:45.294 [2024-12-15 10:42:34.136037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1280409 ] 00:04:45.294 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.294 [2024-12-15 10:42:34.205041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.294 [2024-12-15 10:42:34.281032] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:45.294 [2024-12-15 10:42:34.281144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.230 10:42:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.230 10:42:34 -- common/autotest_common.sh@862 -- # return 0 00:04:46.230 10:42:34 -- common/autotest_common.sh@1595 -- # bdf_id=0 00:04:46.230 10:42:34 -- common/autotest_common.sh@1596 -- # for bdf in "${bdfs[@]}" 00:04:46.230 10:42:34 -- common/autotest_common.sh@1597 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:49.518 nvme0n1 00:04:49.518 10:42:37 -- common/autotest_common.sh@1599 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:49.518 [2024-12-15 10:42:38.141662] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:49.518 request: 00:04:49.518 { 00:04:49.518 "nvme_ctrlr_name": "nvme0", 00:04:49.518 "password": "test", 00:04:49.518 "method": "bdev_nvme_opal_revert", 00:04:49.518 "req_id": 1 00:04:49.518 } 00:04:49.518 Got JSON-RPC error response 00:04:49.518 response: 00:04:49.518 { 00:04:49.518 "code": -32602, 00:04:49.518 "message": "Invalid parameters" 00:04:49.518 } 00:04:49.518 10:42:38 -- common/autotest_common.sh@1599 -- # true 00:04:49.518 10:42:38 -- common/autotest_common.sh@1600 -- # (( ++bdf_id )) 00:04:49.518 10:42:38 -- common/autotest_common.sh@1603 -- # killprocess 1280409 00:04:49.518 10:42:38 -- common/autotest_common.sh@936 -- # '[' -z 1280409 ']' 00:04:49.518 10:42:38 -- common/autotest_common.sh@940 -- # kill -0 1280409 00:04:49.518 10:42:38 -- common/autotest_common.sh@941 -- # uname 00:04:49.518 10:42:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:49.518 10:42:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1280409 00:04:49.518 10:42:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:49.518 10:42:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:49.518 10:42:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1280409' 00:04:49.518 killing process with pid 1280409 00:04:49.518 10:42:38 -- common/autotest_common.sh@955 -- # kill 1280409 00:04:49.518 10:42:38 -- common/autotest_common.sh@960 -- # wait 1280409 00:04:51.423 10:42:40 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:04:51.423 10:42:40 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:04:51.423 10:42:40 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:51.423 10:42:40 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:51.423 10:42:40 -- spdk/autotest.sh@160 -- # timing_enter lib 00:04:51.423 10:42:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:51.423 10:42:40 -- common/autotest_common.sh@10 -- # set +x 00:04:51.423 10:42:40 -- spdk/autotest.sh@162 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:51.423 
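The OPAL revert attempt above is two JSON-RPC calls against the default socket. Issued by hand they would look like this (method names and flags taken from the log; on this controller the second call returns -32602 because OPAL is unsupported):

    cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
    # fails here with "Invalid parameters": nvme0 does not support OPAL
    ./scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test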
10:42:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.423 10:42:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.423 10:42:40 -- common/autotest_common.sh@10 -- # set +x 00:04:51.683 ************************************ 00:04:51.683 START TEST env 00:04:51.683 ************************************ 00:04:51.683 10:42:40 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:51.683 * Looking for test storage... 00:04:51.683 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:04:51.683 10:42:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:51.683 10:42:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:51.683 10:42:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:51.683 10:42:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:51.683 10:42:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:51.683 10:42:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:51.683 10:42:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:51.683 10:42:40 -- scripts/common.sh@335 -- # IFS=.-: 00:04:51.683 10:42:40 -- scripts/common.sh@335 -- # read -ra ver1 00:04:51.683 10:42:40 -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.683 10:42:40 -- scripts/common.sh@336 -- # read -ra ver2 00:04:51.683 10:42:40 -- scripts/common.sh@337 -- # local 'op=<' 00:04:51.683 10:42:40 -- scripts/common.sh@339 -- # ver1_l=2 00:04:51.683 10:42:40 -- scripts/common.sh@340 -- # ver2_l=1 00:04:51.683 10:42:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:51.683 10:42:40 -- scripts/common.sh@343 -- # case "$op" in 00:04:51.683 10:42:40 -- scripts/common.sh@344 -- # : 1 00:04:51.683 10:42:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:51.683 10:42:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:51.683 10:42:40 -- scripts/common.sh@364 -- # decimal 1 00:04:51.683 10:42:40 -- scripts/common.sh@352 -- # local d=1 00:04:51.683 10:42:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.683 10:42:40 -- scripts/common.sh@354 -- # echo 1 00:04:51.683 10:42:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:51.683 10:42:40 -- scripts/common.sh@365 -- # decimal 2 00:04:51.683 10:42:40 -- scripts/common.sh@352 -- # local d=2 00:04:51.683 10:42:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.683 10:42:40 -- scripts/common.sh@354 -- # echo 2 00:04:51.683 10:42:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:51.683 10:42:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:51.683 10:42:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:51.683 10:42:40 -- scripts/common.sh@367 -- # return 0 00:04:51.683 10:42:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.683 10:42:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:51.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.683 --rc genhtml_branch_coverage=1 00:04:51.683 --rc genhtml_function_coverage=1 00:04:51.683 --rc genhtml_legend=1 00:04:51.683 --rc geninfo_all_blocks=1 00:04:51.683 --rc geninfo_unexecuted_blocks=1 00:04:51.683 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:51.683 ' 00:04:51.683 10:42:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:51.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.683 --rc genhtml_branch_coverage=1 00:04:51.683 --rc genhtml_function_coverage=1 00:04:51.683 --rc genhtml_legend=1 00:04:51.683 --rc geninfo_all_blocks=1 00:04:51.683 --rc geninfo_unexecuted_blocks=1 00:04:51.683 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:51.683 ' 00:04:51.683 10:42:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:51.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.683 --rc genhtml_branch_coverage=1 00:04:51.683 --rc genhtml_function_coverage=1 00:04:51.683 --rc genhtml_legend=1 00:04:51.683 --rc geninfo_all_blocks=1 00:04:51.683 --rc geninfo_unexecuted_blocks=1 00:04:51.683 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:51.683 ' 00:04:51.683 10:42:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:51.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.683 --rc genhtml_branch_coverage=1 00:04:51.683 --rc genhtml_function_coverage=1 00:04:51.683 --rc genhtml_legend=1 00:04:51.683 --rc geninfo_all_blocks=1 00:04:51.683 --rc geninfo_unexecuted_blocks=1 00:04:51.683 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:51.683 ' 00:04:51.683 10:42:40 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:51.683 10:42:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.683 10:42:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.683 10:42:40 -- common/autotest_common.sh@10 -- # set +x 00:04:51.683 ************************************ 00:04:51.683 START TEST env_memory 00:04:51.683 ************************************ 00:04:51.683 10:42:40 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 
00:04:51.683 00:04:51.683 00:04:51.683 CUnit - A unit testing framework for C - Version 2.1-3 00:04:51.683 http://cunit.sourceforge.net/ 00:04:51.683 00:04:51.683 00:04:51.683 Suite: memory 00:04:51.684 Test: alloc and free memory map ...[2024-12-15 10:42:40.648286] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:51.684 passed 00:04:51.684 Test: mem map translation ...[2024-12-15 10:42:40.661708] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:51.684 [2024-12-15 10:42:40.661738] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:51.684 [2024-12-15 10:42:40.661768] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:51.684 [2024-12-15 10:42:40.661776] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:51.684 passed 00:04:51.684 Test: mem map registration ...[2024-12-15 10:42:40.682081] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:51.684 [2024-12-15 10:42:40.682096] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:51.684 passed 00:04:51.944 Test: mem map adjacent registrations ...passed 00:04:51.944 00:04:51.944 Run Summary: Type Total Ran Passed Failed Inactive 00:04:51.944 suites 1 1 n/a 0 0 00:04:51.944 tests 4 4 4 0 0 00:04:51.944 asserts 152 152 152 0 n/a 00:04:51.944 00:04:51.944 Elapsed time = 0.083 seconds 00:04:51.944 00:04:51.944 real 0m0.096s 00:04:51.944 user 0m0.085s 00:04:51.944 sys 0m0.011s 00:04:51.944 10:42:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:51.944 10:42:40 -- common/autotest_common.sh@10 -- # set +x 00:04:51.944 ************************************ 00:04:51.944 END TEST env_memory 00:04:51.944 ************************************ 00:04:51.944 10:42:40 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:51.944 10:42:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.944 10:42:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.944 10:42:40 -- common/autotest_common.sh@10 -- # set +x 00:04:51.944 ************************************ 00:04:51.944 START TEST env_vtophys 00:04:51.944 ************************************ 00:04:51.944 10:42:40 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:51.944 EAL: lib.eal log level changed from notice to debug 00:04:51.944 EAL: Detected lcore 0 as core 0 on socket 0 00:04:51.944 EAL: Detected lcore 1 as core 1 on socket 0 00:04:51.944 EAL: Detected lcore 2 as core 2 on socket 0 00:04:51.944 EAL: Detected lcore 3 as core 3 on socket 0 00:04:51.944 EAL: Detected lcore 4 as core 4 on socket 0 00:04:51.944 EAL: Detected lcore 5 as core 5 on socket 0 00:04:51.944 EAL: Detected lcore 6 as 
core 6 on socket 0 00:04:51.944 EAL: Detected lcore 7 as core 8 on socket 0 00:04:51.944 EAL: Detected lcore 8 as core 9 on socket 0 00:04:51.944 EAL: Detected lcore 9 as core 10 on socket 0 00:04:51.944 EAL: Detected lcore 10 as core 11 on socket 0 00:04:51.944 EAL: Detected lcore 11 as core 12 on socket 0 00:04:51.944 EAL: Detected lcore 12 as core 13 on socket 0 00:04:51.944 EAL: Detected lcore 13 as core 14 on socket 0 00:04:51.944 EAL: Detected lcore 14 as core 16 on socket 0 00:04:51.944 EAL: Detected lcore 15 as core 17 on socket 0 00:04:51.944 EAL: Detected lcore 16 as core 18 on socket 0 00:04:51.944 EAL: Detected lcore 17 as core 19 on socket 0 00:04:51.944 EAL: Detected lcore 18 as core 20 on socket 0 00:04:51.944 EAL: Detected lcore 19 as core 21 on socket 0 00:04:51.944 EAL: Detected lcore 20 as core 22 on socket 0 00:04:51.944 EAL: Detected lcore 21 as core 24 on socket 0 00:04:51.944 EAL: Detected lcore 22 as core 25 on socket 0 00:04:51.944 EAL: Detected lcore 23 as core 26 on socket 0 00:04:51.944 EAL: Detected lcore 24 as core 27 on socket 0 00:04:51.944 EAL: Detected lcore 25 as core 28 on socket 0 00:04:51.944 EAL: Detected lcore 26 as core 29 on socket 0 00:04:51.944 EAL: Detected lcore 27 as core 30 on socket 0 00:04:51.944 EAL: Detected lcore 28 as core 0 on socket 1 00:04:51.944 EAL: Detected lcore 29 as core 1 on socket 1 00:04:51.944 EAL: Detected lcore 30 as core 2 on socket 1 00:04:51.944 EAL: Detected lcore 31 as core 3 on socket 1 00:04:51.944 EAL: Detected lcore 32 as core 4 on socket 1 00:04:51.944 EAL: Detected lcore 33 as core 5 on socket 1 00:04:51.944 EAL: Detected lcore 34 as core 6 on socket 1 00:04:51.944 EAL: Detected lcore 35 as core 8 on socket 1 00:04:51.944 EAL: Detected lcore 36 as core 9 on socket 1 00:04:51.944 EAL: Detected lcore 37 as core 10 on socket 1 00:04:51.944 EAL: Detected lcore 38 as core 11 on socket 1 00:04:51.944 EAL: Detected lcore 39 as core 12 on socket 1 00:04:51.944 EAL: Detected lcore 40 as core 13 on socket 1 00:04:51.944 EAL: Detected lcore 41 as core 14 on socket 1 00:04:51.944 EAL: Detected lcore 42 as core 16 on socket 1 00:04:51.944 EAL: Detected lcore 43 as core 17 on socket 1 00:04:51.944 EAL: Detected lcore 44 as core 18 on socket 1 00:04:51.944 EAL: Detected lcore 45 as core 19 on socket 1 00:04:51.944 EAL: Detected lcore 46 as core 20 on socket 1 00:04:51.944 EAL: Detected lcore 47 as core 21 on socket 1 00:04:51.944 EAL: Detected lcore 48 as core 22 on socket 1 00:04:51.944 EAL: Detected lcore 49 as core 24 on socket 1 00:04:51.944 EAL: Detected lcore 50 as core 25 on socket 1 00:04:51.944 EAL: Detected lcore 51 as core 26 on socket 1 00:04:51.944 EAL: Detected lcore 52 as core 27 on socket 1 00:04:51.944 EAL: Detected lcore 53 as core 28 on socket 1 00:04:51.944 EAL: Detected lcore 54 as core 29 on socket 1 00:04:51.944 EAL: Detected lcore 55 as core 30 on socket 1 00:04:51.944 EAL: Detected lcore 56 as core 0 on socket 0 00:04:51.944 EAL: Detected lcore 57 as core 1 on socket 0 00:04:51.944 EAL: Detected lcore 58 as core 2 on socket 0 00:04:51.944 EAL: Detected lcore 59 as core 3 on socket 0 00:04:51.944 EAL: Detected lcore 60 as core 4 on socket 0 00:04:51.944 EAL: Detected lcore 61 as core 5 on socket 0 00:04:51.944 EAL: Detected lcore 62 as core 6 on socket 0 00:04:51.944 EAL: Detected lcore 63 as core 8 on socket 0 00:04:51.944 EAL: Detected lcore 64 as core 9 on socket 0 00:04:51.944 EAL: Detected lcore 65 as core 10 on socket 0 00:04:51.944 EAL: Detected lcore 66 as core 11 on socket 0 00:04:51.944 EAL: 
Detected lcore 67 as core 12 on socket 0 00:04:51.944 EAL: Detected lcore 68 as core 13 on socket 0 00:04:51.944 EAL: Detected lcore 69 as core 14 on socket 0 00:04:51.944 EAL: Detected lcore 70 as core 16 on socket 0 00:04:51.944 EAL: Detected lcore 71 as core 17 on socket 0 00:04:51.944 EAL: Detected lcore 72 as core 18 on socket 0 00:04:51.944 EAL: Detected lcore 73 as core 19 on socket 0 00:04:51.944 EAL: Detected lcore 74 as core 20 on socket 0 00:04:51.944 EAL: Detected lcore 75 as core 21 on socket 0 00:04:51.944 EAL: Detected lcore 76 as core 22 on socket 0 00:04:51.944 EAL: Detected lcore 77 as core 24 on socket 0 00:04:51.944 EAL: Detected lcore 78 as core 25 on socket 0 00:04:51.944 EAL: Detected lcore 79 as core 26 on socket 0 00:04:51.944 EAL: Detected lcore 80 as core 27 on socket 0 00:04:51.944 EAL: Detected lcore 81 as core 28 on socket 0 00:04:51.944 EAL: Detected lcore 82 as core 29 on socket 0 00:04:51.944 EAL: Detected lcore 83 as core 30 on socket 0 00:04:51.944 EAL: Detected lcore 84 as core 0 on socket 1 00:04:51.944 EAL: Detected lcore 85 as core 1 on socket 1 00:04:51.944 EAL: Detected lcore 86 as core 2 on socket 1 00:04:51.944 EAL: Detected lcore 87 as core 3 on socket 1 00:04:51.944 EAL: Detected lcore 88 as core 4 on socket 1 00:04:51.944 EAL: Detected lcore 89 as core 5 on socket 1 00:04:51.944 EAL: Detected lcore 90 as core 6 on socket 1 00:04:51.944 EAL: Detected lcore 91 as core 8 on socket 1 00:04:51.944 EAL: Detected lcore 92 as core 9 on socket 1 00:04:51.944 EAL: Detected lcore 93 as core 10 on socket 1 00:04:51.944 EAL: Detected lcore 94 as core 11 on socket 1 00:04:51.944 EAL: Detected lcore 95 as core 12 on socket 1 00:04:51.944 EAL: Detected lcore 96 as core 13 on socket 1 00:04:51.944 EAL: Detected lcore 97 as core 14 on socket 1 00:04:51.944 EAL: Detected lcore 98 as core 16 on socket 1 00:04:51.944 EAL: Detected lcore 99 as core 17 on socket 1 00:04:51.944 EAL: Detected lcore 100 as core 18 on socket 1 00:04:51.944 EAL: Detected lcore 101 as core 19 on socket 1 00:04:51.944 EAL: Detected lcore 102 as core 20 on socket 1 00:04:51.944 EAL: Detected lcore 103 as core 21 on socket 1 00:04:51.944 EAL: Detected lcore 104 as core 22 on socket 1 00:04:51.944 EAL: Detected lcore 105 as core 24 on socket 1 00:04:51.944 EAL: Detected lcore 106 as core 25 on socket 1 00:04:51.944 EAL: Detected lcore 107 as core 26 on socket 1 00:04:51.944 EAL: Detected lcore 108 as core 27 on socket 1 00:04:51.944 EAL: Detected lcore 109 as core 28 on socket 1 00:04:51.944 EAL: Detected lcore 110 as core 29 on socket 1 00:04:51.944 EAL: Detected lcore 111 as core 30 on socket 1 00:04:51.944 EAL: Maximum logical cores by configuration: 128 00:04:51.944 EAL: Detected CPU lcores: 112 00:04:51.944 EAL: Detected NUMA nodes: 2 00:04:51.944 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:51.944 EAL: Checking presence of .so 'librte_eal.so.24' 00:04:51.944 EAL: Checking presence of .so 'librte_eal.so' 00:04:51.945 EAL: Detected static linkage of DPDK 00:04:51.945 EAL: No shared files mode enabled, IPC will be disabled 00:04:51.945 EAL: Bus pci wants IOVA as 'DC' 00:04:51.945 EAL: Buses did not request a specific IOVA mode. 00:04:51.945 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:51.945 EAL: Selected IOVA mode 'VA' 00:04:51.945 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.945 EAL: Probing VFIO support... 
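The VFIO probe above can be sanity-checked from the shell before a run; these are standard Linux sysfs/procfs locations, not paths taken from this log:

    # non-empty iommu_groups means an IOMMU is active, so IOVA-as-VA is viable
    ls /sys/kernel/iommu_groups | wc -l
    # confirm the vfio-pci module the devices were rebound to is loaded
    grep -q vfio_pci /proc/modules && echo 'vfio-pci loaded'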
00:04:51.945 EAL: IOMMU type 1 (Type 1) is supported 00:04:51.945 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:51.945 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:51.945 EAL: VFIO support initialized 00:04:51.945 EAL: Ask a virtual area of 0x2e000 bytes 00:04:51.945 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:51.945 EAL: Setting up physically contiguous memory... 00:04:51.945 EAL: Setting maximum number of open files to 524288 00:04:51.945 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:51.945 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:51.945 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:51.945 EAL: Ask a virtual area of 0x61000 bytes 00:04:51.945 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:51.945 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:51.945 EAL: Ask a virtual area of 0x400000000 bytes 00:04:51.945 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:51.945 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:51.945 EAL: Ask a virtual area of 0x61000 bytes 00:04:51.945 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:51.945 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:51.945 EAL: Ask a virtual area of 0x400000000 bytes 00:04:51.945 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:51.945 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:51.945 EAL: Ask a virtual area of 0x61000 bytes 00:04:51.945 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:51.945 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:51.945 EAL: Ask a virtual area of 0x400000000 bytes 00:04:51.945 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:51.945 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:51.945 EAL: Ask a virtual area of 0x61000 bytes 00:04:51.945 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:51.945 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:51.945 EAL: Ask a virtual area of 0x400000000 bytes 00:04:51.945 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:51.945 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:51.945 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:51.945 EAL: Ask a virtual area of 0x61000 bytes 00:04:51.945 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:51.945 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:51.945 EAL: Ask a virtual area of 0x400000000 bytes 00:04:51.945 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:51.945 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:51.945 EAL: Ask a virtual area of 0x61000 bytes 00:04:51.945 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:51.945 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:51.945 EAL: Ask a virtual area of 0x400000000 bytes 00:04:51.945 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:51.945 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:51.945 EAL: Ask a virtual area of 0x61000 bytes 00:04:51.945 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:51.945 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:51.945 EAL: Ask a virtual area of 0x400000000 bytes 00:04:51.945 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:04:51.945 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:51.945 EAL: Ask a virtual area of 0x61000 bytes 00:04:51.945 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:51.945 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:51.945 EAL: Ask a virtual area of 0x400000000 bytes 00:04:51.945 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:51.945 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:51.945 EAL: Hugepages will be freed exactly as allocated. 00:04:51.945 EAL: No shared files mode enabled, IPC is disabled 00:04:51.945 EAL: No shared files mode enabled, IPC is disabled 00:04:51.945 EAL: TSC frequency is ~2500000 KHz 00:04:51.945 EAL: Main lcore 0 is ready (tid=7fb69aeeba00;cpuset=[0]) 00:04:51.945 EAL: Trying to obtain current memory policy. 00:04:51.945 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.945 EAL: Restoring previous memory policy: 0 00:04:51.945 EAL: request: mp_malloc_sync 00:04:51.945 EAL: No shared files mode enabled, IPC is disabled 00:04:51.945 EAL: Heap on socket 0 was expanded by 2MB 00:04:51.945 EAL: No shared files mode enabled, IPC is disabled 00:04:51.945 EAL: Mem event callback 'spdk:(nil)' registered 00:04:51.945 00:04:51.945 00:04:51.945 CUnit - A unit testing framework for C - Version 2.1-3 00:04:51.945 http://cunit.sourceforge.net/ 00:04:51.945 00:04:51.945 00:04:51.945 Suite: components_suite 00:04:51.945 Test: vtophys_malloc_test ...passed 00:04:51.945 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:51.945 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.945 EAL: Restoring previous memory policy: 4 00:04:51.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.945 EAL: request: mp_malloc_sync 00:04:51.945 EAL: No shared files mode enabled, IPC is disabled 00:04:51.945 EAL: Heap on socket 0 was expanded by 4MB 00:04:51.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.945 EAL: request: mp_malloc_sync 00:04:51.945 EAL: No shared files mode enabled, IPC is disabled 00:04:51.945 EAL: Heap on socket 0 was shrunk by 4MB 00:04:51.945 EAL: Trying to obtain current memory policy. 00:04:51.945 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.945 EAL: Restoring previous memory policy: 4 00:04:51.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.945 EAL: request: mp_malloc_sync 00:04:51.945 EAL: No shared files mode enabled, IPC is disabled 00:04:51.945 EAL: Heap on socket 0 was expanded by 6MB 00:04:51.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.945 EAL: request: mp_malloc_sync 00:04:51.945 EAL: No shared files mode enabled, IPC is disabled 00:04:51.945 EAL: Heap on socket 0 was shrunk by 6MB 00:04:51.945 EAL: Trying to obtain current memory policy. 00:04:51.945 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.945 EAL: Restoring previous memory policy: 4 00:04:51.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.945 EAL: request: mp_malloc_sync 00:04:51.945 EAL: No shared files mode enabled, IPC is disabled 00:04:51.945 EAL: Heap on socket 0 was expanded by 10MB 00:04:51.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.945 EAL: request: mp_malloc_sync 00:04:51.945 EAL: No shared files mode enabled, IPC is disabled 00:04:51.945 EAL: Heap on socket 0 was shrunk by 10MB 00:04:51.945 EAL: Trying to obtain current memory policy. 
00:04:51.945 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.945 EAL: Restoring previous memory policy: 4 00:04:51.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.945 EAL: request: mp_malloc_sync 00:04:51.945 EAL: No shared files mode enabled, IPC is disabled 00:04:51.945 EAL: Heap on socket 0 was expanded by 18MB 00:04:51.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.945 EAL: request: mp_malloc_sync 00:04:51.945 EAL: No shared files mode enabled, IPC is disabled 00:04:51.945 EAL: Heap on socket 0 was shrunk by 18MB 00:04:51.945 EAL: Trying to obtain current memory policy. 00:04:51.945 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.945 EAL: Restoring previous memory policy: 4 00:04:51.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.945 EAL: request: mp_malloc_sync 00:04:51.945 EAL: No shared files mode enabled, IPC is disabled 00:04:51.945 EAL: Heap on socket 0 was expanded by 34MB 00:04:51.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.945 EAL: request: mp_malloc_sync 00:04:51.945 EAL: No shared files mode enabled, IPC is disabled 00:04:51.945 EAL: Heap on socket 0 was shrunk by 34MB 00:04:51.945 EAL: Trying to obtain current memory policy. 00:04:51.945 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.945 EAL: Restoring previous memory policy: 4 00:04:51.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.945 EAL: request: mp_malloc_sync 00:04:51.945 EAL: No shared files mode enabled, IPC is disabled 00:04:51.945 EAL: Heap on socket 0 was expanded by 66MB 00:04:51.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.945 EAL: request: mp_malloc_sync 00:04:51.945 EAL: No shared files mode enabled, IPC is disabled 00:04:51.945 EAL: Heap on socket 0 was shrunk by 66MB 00:04:51.945 EAL: Trying to obtain current memory policy. 00:04:51.945 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.945 EAL: Restoring previous memory policy: 4 00:04:51.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.945 EAL: request: mp_malloc_sync 00:04:51.945 EAL: No shared files mode enabled, IPC is disabled 00:04:51.945 EAL: Heap on socket 0 was expanded by 130MB 00:04:51.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.945 EAL: request: mp_malloc_sync 00:04:51.945 EAL: No shared files mode enabled, IPC is disabled 00:04:51.945 EAL: Heap on socket 0 was shrunk by 130MB 00:04:51.945 EAL: Trying to obtain current memory policy. 00:04:51.945 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.205 EAL: Restoring previous memory policy: 4 00:04:52.205 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.205 EAL: request: mp_malloc_sync 00:04:52.205 EAL: No shared files mode enabled, IPC is disabled 00:04:52.205 EAL: Heap on socket 0 was expanded by 258MB 00:04:52.205 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.205 EAL: request: mp_malloc_sync 00:04:52.205 EAL: No shared files mode enabled, IPC is disabled 00:04:52.205 EAL: Heap on socket 0 was shrunk by 258MB 00:04:52.205 EAL: Trying to obtain current memory policy. 
00:04:52.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.205 EAL: Restoring previous memory policy: 4 00:04:52.205 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.205 EAL: request: mp_malloc_sync 00:04:52.205 EAL: No shared files mode enabled, IPC is disabled 00:04:52.205 EAL: Heap on socket 0 was expanded by 514MB 00:04:52.465 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.465 EAL: request: mp_malloc_sync 00:04:52.465 EAL: No shared files mode enabled, IPC is disabled 00:04:52.465 EAL: Heap on socket 0 was shrunk by 514MB 00:04:52.465 EAL: Trying to obtain current memory policy. 00:04:52.465 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.724 EAL: Restoring previous memory policy: 4 00:04:52.724 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.724 EAL: request: mp_malloc_sync 00:04:52.724 EAL: No shared files mode enabled, IPC is disabled 00:04:52.724 EAL: Heap on socket 0 was expanded by 1026MB 00:04:52.724 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.983 EAL: request: mp_malloc_sync 00:04:52.983 EAL: No shared files mode enabled, IPC is disabled 00:04:52.983 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:52.983 passed 00:04:52.983 00:04:52.983 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.983 suites 1 1 n/a 0 0 00:04:52.983 tests 2 2 2 0 0 00:04:52.983 asserts 497 497 497 0 n/a 00:04:52.983 00:04:52.983 Elapsed time = 0.960 seconds 00:04:52.983 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.983 EAL: request: mp_malloc_sync 00:04:52.983 EAL: No shared files mode enabled, IPC is disabled 00:04:52.983 EAL: Heap on socket 0 was shrunk by 2MB 00:04:52.983 EAL: No shared files mode enabled, IPC is disabled 00:04:52.983 EAL: No shared files mode enabled, IPC is disabled 00:04:52.983 EAL: No shared files mode enabled, IPC is disabled 00:04:52.983 00:04:52.983 real 0m1.078s 00:04:52.983 user 0m0.627s 00:04:52.983 sys 0m0.425s 00:04:52.983 10:42:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.983 10:42:41 -- common/autotest_common.sh@10 -- # set +x 00:04:52.983 ************************************ 00:04:52.983 END TEST env_vtophys 00:04:52.983 ************************************ 00:04:52.983 10:42:41 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:52.983 10:42:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.983 10:42:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.983 10:42:41 -- common/autotest_common.sh@10 -- # set +x 00:04:52.983 ************************************ 00:04:52.983 START TEST env_pci 00:04:52.983 ************************************ 00:04:52.983 10:42:41 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:52.983 00:04:52.983 00:04:52.983 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.983 http://cunit.sourceforge.net/ 00:04:52.983 00:04:52.983 00:04:52.983 Suite: pci 00:04:52.983 Test: pci_hook ...[2024-12-15 10:42:41.886626] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1281885 has claimed it 00:04:52.983 EAL: Cannot find device (10000:00:01.0) 00:04:52.983 EAL: Failed to attach device on primary process 00:04:52.983 passed 00:04:52.983 00:04:52.983 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.983 suites 1 1 n/a 0 0 00:04:52.983 tests 1 1 1 0 0 
00:04:52.983 asserts 25 25 25 0 n/a 00:04:52.983 00:04:52.983 Elapsed time = 0.034 seconds 00:04:52.983 00:04:52.983 real 0m0.053s 00:04:52.983 user 0m0.012s 00:04:52.983 sys 0m0.040s 00:04:52.983 10:42:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.983 10:42:41 -- common/autotest_common.sh@10 -- # set +x 00:04:52.983 ************************************ 00:04:52.983 END TEST env_pci 00:04:52.983 ************************************ 00:04:52.983 10:42:41 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:52.983 10:42:41 -- env/env.sh@15 -- # uname 00:04:52.983 10:42:41 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:52.983 10:42:41 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:52.983 10:42:41 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:52.983 10:42:41 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:52.983 10:42:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.983 10:42:41 -- common/autotest_common.sh@10 -- # set +x 00:04:52.983 ************************************ 00:04:52.983 START TEST env_dpdk_post_init 00:04:52.983 ************************************ 00:04:52.983 10:42:41 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:52.983 EAL: Detected CPU lcores: 112 00:04:52.983 EAL: Detected NUMA nodes: 2 00:04:52.983 EAL: Detected static linkage of DPDK 00:04:53.243 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:53.243 EAL: Selected IOVA mode 'VA' 00:04:53.243 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.243 EAL: VFIO support initialized 00:04:53.243 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:53.243 EAL: Using IOMMU type 1 (Type 1) 00:04:54.180 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:57.466 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:57.466 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001000000 00:04:57.725 Starting DPDK initialization... 00:04:57.725 Starting SPDK post initialization... 00:04:57.725 SPDK NVMe probe 00:04:57.725 Attaching to 0000:d8:00.0 00:04:57.725 Attached to 0000:d8:00.0 00:04:57.725 Cleaning up... 
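Condensed, the env_dpdk_post_init run above is two steps: bring up the env layer with the -c 0x1 / --base-virtaddr options from the command line, then probe the local PCIe bus for NVMe controllers, which is what prints the Attaching/Attached pair for 0000:d8:00.0. A rough sketch against SPDK's public spdk/env.h and spdk/nvme.h headers -- illustrative only, not the actual test code, with error handling trimmed and the app name invented:

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attaching to %s\n", trid->traddr);
        return true;    /* accept every controller the probe offers */
    }

    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
    }

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "post_init_sketch";   /* hypothetical app name */
        opts.core_mask = "0x1";           /* the -c 0x1 seen in the log */
        if (spdk_env_init(&opts) < 0)
            return 1;
        /* A NULL transport ID means: enumerate local PCIe NVMe devices. */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }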
00:04:57.725 00:04:57.725 real 0m4.747s 00:04:57.725 user 0m3.595s 00:04:57.725 sys 0m0.393s 00:04:57.725 10:42:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:57.725 10:42:46 -- common/autotest_common.sh@10 -- # set +x 00:04:57.725 ************************************ 00:04:57.725 END TEST env_dpdk_post_init 00:04:57.725 ************************************ 00:04:57.984 10:42:46 -- env/env.sh@26 -- # uname 00:04:57.984 10:42:46 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:57.984 10:42:46 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:57.984 10:42:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.984 10:42:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.984 10:42:46 -- common/autotest_common.sh@10 -- # set +x 00:04:57.984 ************************************ 00:04:57.984 START TEST env_mem_callbacks 00:04:57.984 ************************************ 00:04:57.984 10:42:46 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:57.984 EAL: Detected CPU lcores: 112 00:04:57.984 EAL: Detected NUMA nodes: 2 00:04:57.984 EAL: Detected static linkage of DPDK 00:04:57.984 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:57.984 EAL: Selected IOVA mode 'VA' 00:04:57.984 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.984 EAL: VFIO support initialized 00:04:57.984 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:57.984 00:04:57.984 00:04:57.984 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.984 http://cunit.sourceforge.net/ 00:04:57.984 00:04:57.984 00:04:57.984 Suite: memory 00:04:57.984 Test: test ... 
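The register/unregister lines that follow are what this mem_callbacks suite actually asserts on: every spdk_mem_register()/spdk_mem_unregister() call, and every heap change behind a malloc, must show up as a notification on each registered SPDK memory map. A bare-bones sketch of such a map, assuming only the public spdk/env.h API (the zero default translation and the immediate free are placeholders, not what the real test checks):

    #include <stdio.h>
    #include "spdk/env.h"

    /* One call per (un)registered region -- the counterpart of the
     * "register 0x... / unregister 0x..." lines in the trace below. */
    static int
    notify_cb(void *ctx, struct spdk_mem_map *map,
              enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
    {
        printf("%s %p %zu\n",
               action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
               vaddr, size);
        return 0;
    }

    static const struct spdk_mem_map_ops ops = {
        .notify_cb = notify_cb,
        .are_contiguous = NULL,
    };

    int
    main(void)
    {
        struct spdk_env_opts opts;
        struct spdk_mem_map *map;

        spdk_env_opts_init(&opts);
        if (spdk_env_init(&opts) < 0)
            return 1;
        /* Allocating the map replays REGISTER for every region that already
         * exists, then reports live changes until the map is freed. */
        map = spdk_mem_map_alloc(0, &ops, NULL);
        spdk_mem_map_free(&map);
        return 0;
    }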
00:04:57.984 register 0x200000200000 2097152 00:04:57.984 malloc 3145728 00:04:57.984 register 0x200000400000 4194304 00:04:57.984 buf 0x200000500000 len 3145728 PASSED 00:04:57.984 malloc 64 00:04:57.984 buf 0x2000004fff40 len 64 PASSED 00:04:57.984 malloc 4194304 00:04:57.984 register 0x200000800000 6291456 00:04:57.984 buf 0x200000a00000 len 4194304 PASSED 00:04:57.984 free 0x200000500000 3145728 00:04:57.984 free 0x2000004fff40 64 00:04:57.984 unregister 0x200000400000 4194304 PASSED 00:04:57.984 free 0x200000a00000 4194304 00:04:57.984 unregister 0x200000800000 6291456 PASSED 00:04:57.984 malloc 8388608 00:04:57.984 register 0x200000400000 10485760 00:04:57.984 buf 0x200000600000 len 8388608 PASSED 00:04:57.984 free 0x200000600000 8388608 00:04:57.984 unregister 0x200000400000 10485760 PASSED 00:04:57.984 passed 00:04:57.984 00:04:57.984 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.984 suites 1 1 n/a 0 0 00:04:57.984 tests 1 1 1 0 0 00:04:57.984 asserts 15 15 15 0 n/a 00:04:57.984 00:04:57.984 Elapsed time = 0.005 seconds 00:04:57.984 00:04:57.984 real 0m0.061s 00:04:57.984 user 0m0.012s 00:04:57.984 sys 0m0.049s 00:04:57.984 10:42:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:57.984 10:42:46 -- common/autotest_common.sh@10 -- # set +x 00:04:57.984 ************************************ 00:04:57.984 END TEST env_mem_callbacks 00:04:57.984 ************************************ 00:04:57.984 00:04:57.984 real 0m6.422s 00:04:57.984 user 0m4.502s 00:04:57.984 sys 0m1.183s 00:04:57.984 10:42:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:57.984 10:42:46 -- common/autotest_common.sh@10 -- # set +x 00:04:57.984 ************************************ 00:04:57.984 END TEST env 00:04:57.984 ************************************ 00:04:57.984 10:42:46 -- spdk/autotest.sh@163 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:57.984 10:42:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.984 10:42:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.984 10:42:46 -- common/autotest_common.sh@10 -- # set +x 00:04:57.984 ************************************ 00:04:57.984 START TEST rpc 00:04:57.984 ************************************ 00:04:57.984 10:42:46 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:57.984 * Looking for test storage... 
00:04:57.984 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:57.984 10:42:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:58.244 10:42:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:58.244 10:42:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:58.244 10:42:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:58.244 10:42:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:58.244 10:42:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:58.244 10:42:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:58.244 10:42:47 -- scripts/common.sh@335 -- # IFS=.-: 00:04:58.244 10:42:47 -- scripts/common.sh@335 -- # read -ra ver1 00:04:58.244 10:42:47 -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.244 10:42:47 -- scripts/common.sh@336 -- # read -ra ver2 00:04:58.244 10:42:47 -- scripts/common.sh@337 -- # local 'op=<' 00:04:58.244 10:42:47 -- scripts/common.sh@339 -- # ver1_l=2 00:04:58.244 10:42:47 -- scripts/common.sh@340 -- # ver2_l=1 00:04:58.244 10:42:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:58.244 10:42:47 -- scripts/common.sh@343 -- # case "$op" in 00:04:58.244 10:42:47 -- scripts/common.sh@344 -- # : 1 00:04:58.244 10:42:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:58.244 10:42:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.244 10:42:47 -- scripts/common.sh@364 -- # decimal 1 00:04:58.244 10:42:47 -- scripts/common.sh@352 -- # local d=1 00:04:58.244 10:42:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.244 10:42:47 -- scripts/common.sh@354 -- # echo 1 00:04:58.244 10:42:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:58.244 10:42:47 -- scripts/common.sh@365 -- # decimal 2 00:04:58.244 10:42:47 -- scripts/common.sh@352 -- # local d=2 00:04:58.244 10:42:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.244 10:42:47 -- scripts/common.sh@354 -- # echo 2 00:04:58.244 10:42:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:58.244 10:42:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:58.244 10:42:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:58.244 10:42:47 -- scripts/common.sh@367 -- # return 0 00:04:58.244 10:42:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.244 10:42:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:58.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.244 --rc genhtml_branch_coverage=1 00:04:58.244 --rc genhtml_function_coverage=1 00:04:58.244 --rc genhtml_legend=1 00:04:58.244 --rc geninfo_all_blocks=1 00:04:58.244 --rc geninfo_unexecuted_blocks=1 00:04:58.244 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:58.244 ' 00:04:58.244 10:42:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:58.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.244 --rc genhtml_branch_coverage=1 00:04:58.244 --rc genhtml_function_coverage=1 00:04:58.244 --rc genhtml_legend=1 00:04:58.244 --rc geninfo_all_blocks=1 00:04:58.244 --rc geninfo_unexecuted_blocks=1 00:04:58.244 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:58.244 ' 00:04:58.244 10:42:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:58.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.244 --rc genhtml_branch_coverage=1 00:04:58.244 
--rc genhtml_function_coverage=1 00:04:58.244 --rc genhtml_legend=1 00:04:58.244 --rc geninfo_all_blocks=1 00:04:58.244 --rc geninfo_unexecuted_blocks=1 00:04:58.244 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:58.244 ' 00:04:58.244 10:42:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:58.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.244 --rc genhtml_branch_coverage=1 00:04:58.244 --rc genhtml_function_coverage=1 00:04:58.244 --rc genhtml_legend=1 00:04:58.244 --rc geninfo_all_blocks=1 00:04:58.244 --rc geninfo_unexecuted_blocks=1 00:04:58.244 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:58.245 ' 00:04:58.245 10:42:47 -- rpc/rpc.sh@65 -- # spdk_pid=1282908 00:04:58.245 10:42:47 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:58.245 10:42:47 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.245 10:42:47 -- rpc/rpc.sh@67 -- # waitforlisten 1282908 00:04:58.245 10:42:47 -- common/autotest_common.sh@829 -- # '[' -z 1282908 ']' 00:04:58.245 10:42:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.245 10:42:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.245 10:42:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.245 10:42:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.245 10:42:47 -- common/autotest_common.sh@10 -- # set +x 00:04:58.245 [2024-12-15 10:42:47.107709] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:58.245 [2024-12-15 10:42:47.107775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1282908 ] 00:04:58.245 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.245 [2024-12-15 10:42:47.172249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.245 [2024-12-15 10:42:47.245311] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:58.245 [2024-12-15 10:42:47.245430] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:58.245 [2024-12-15 10:42:47.245441] app.c: 492:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1282908' to capture a snapshot of events at runtime. 00:04:58.245 [2024-12-15 10:42:47.245450] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1282908 for offline analysis/debug. 
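With spdk_tgt coming up here, every rpc_cmd in the remainder of this trace is a JSON-RPC request over the /var/tmp/spdk.sock socket it listens on. Methods such as bdev_malloc_create and bdev_get_bdevs are registered inside the target as a handler plus a state mask; the following is a bare-bones sketch of such a registration, compiled into an SPDK application -- the method name rpc_example_ping is invented for illustration and is not one of the real RPCs exercised below:

    #include "spdk/rpc.h"
    #include "spdk/jsonrpc.h"
    #include "spdk/json.h"

    /* Handler: reject unexpected params, then write a JSON result. */
    static void
    rpc_example_ping(struct spdk_jsonrpc_request *request,
                     const struct spdk_json_val *params)
    {
        struct spdk_json_write_ctx *w;

        if (params != NULL) {
            spdk_jsonrpc_send_error_response(request,
                                             SPDK_JSONRPC_ERROR_INVALID_PARAMS,
                                             "rpc_example_ping takes no parameters");
            return;
        }
        w = spdk_jsonrpc_begin_result(request);
        spdk_json_write_string(w, "pong");
        spdk_jsonrpc_end_result(request, w);
    }
    /* SPDK_RPC_RUNTIME: callable once startup completes, like the bdev_*
     * methods driven by rpc_cmd in the tests below. */
    SPDK_RPC_REGISTER("rpc_example_ping", rpc_example_ping, SPDK_RPC_RUNTIME)

Once linked in, the target would report the new method via rpc_get_methods; exposing it through scripts/rpc.py (for example via its plugin mechanism) is a separate step.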
00:04:58.245 [2024-12-15 10:42:47.245469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.182 10:42:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.182 10:42:47 -- common/autotest_common.sh@862 -- # return 0 00:04:59.182 10:42:47 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:59.182 10:42:47 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:59.182 10:42:47 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:59.182 10:42:47 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:59.182 10:42:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.182 10:42:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.182 10:42:47 -- common/autotest_common.sh@10 -- # set +x 00:04:59.182 ************************************ 00:04:59.182 START TEST rpc_integrity 00:04:59.182 ************************************ 00:04:59.182 10:42:47 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:59.182 10:42:47 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:59.182 10:42:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.182 10:42:47 -- common/autotest_common.sh@10 -- # set +x 00:04:59.182 10:42:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.182 10:42:47 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:59.182 10:42:47 -- rpc/rpc.sh@13 -- # jq length 00:04:59.182 10:42:47 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:59.182 10:42:48 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:59.182 10:42:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.182 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.182 10:42:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.182 10:42:48 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:59.182 10:42:48 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:59.182 10:42:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.182 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.182 10:42:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.182 10:42:48 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:59.182 { 00:04:59.182 "name": "Malloc0", 00:04:59.182 "aliases": [ 00:04:59.182 "21869428-3a91-47ed-9f2b-2a972a254f49" 00:04:59.182 ], 00:04:59.182 "product_name": "Malloc disk", 00:04:59.182 "block_size": 512, 00:04:59.182 "num_blocks": 16384, 00:04:59.182 "uuid": "21869428-3a91-47ed-9f2b-2a972a254f49", 00:04:59.182 "assigned_rate_limits": { 00:04:59.182 "rw_ios_per_sec": 0, 00:04:59.182 "rw_mbytes_per_sec": 0, 00:04:59.182 "r_mbytes_per_sec": 0, 00:04:59.182 "w_mbytes_per_sec": 0 00:04:59.182 }, 00:04:59.182 "claimed": false, 00:04:59.182 "zoned": false, 00:04:59.182 "supported_io_types": { 00:04:59.182 "read": true, 00:04:59.182 "write": true, 00:04:59.182 "unmap": true, 00:04:59.182 "write_zeroes": true, 00:04:59.182 "flush": true, 00:04:59.182 "reset": true, 00:04:59.182 "compare": false, 00:04:59.182 "compare_and_write": false, 
00:04:59.182 "abort": true, 00:04:59.182 "nvme_admin": false, 00:04:59.182 "nvme_io": false 00:04:59.182 }, 00:04:59.182 "memory_domains": [ 00:04:59.182 { 00:04:59.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.182 "dma_device_type": 2 00:04:59.182 } 00:04:59.182 ], 00:04:59.182 "driver_specific": {} 00:04:59.182 } 00:04:59.182 ]' 00:04:59.182 10:42:48 -- rpc/rpc.sh@17 -- # jq length 00:04:59.182 10:42:48 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:59.182 10:42:48 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:59.182 10:42:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.182 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.182 [2024-12-15 10:42:48.080698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:59.182 [2024-12-15 10:42:48.080736] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:59.182 [2024-12-15 10:42:48.080758] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4e6d030 00:04:59.182 [2024-12-15 10:42:48.080769] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:59.182 [2024-12-15 10:42:48.081603] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:59.182 [2024-12-15 10:42:48.081627] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:59.182 Passthru0 00:04:59.182 10:42:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.182 10:42:48 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:59.182 10:42:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.182 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.182 10:42:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.182 10:42:48 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:59.182 { 00:04:59.182 "name": "Malloc0", 00:04:59.182 "aliases": [ 00:04:59.182 "21869428-3a91-47ed-9f2b-2a972a254f49" 00:04:59.182 ], 00:04:59.182 "product_name": "Malloc disk", 00:04:59.182 "block_size": 512, 00:04:59.182 "num_blocks": 16384, 00:04:59.182 "uuid": "21869428-3a91-47ed-9f2b-2a972a254f49", 00:04:59.182 "assigned_rate_limits": { 00:04:59.182 "rw_ios_per_sec": 0, 00:04:59.182 "rw_mbytes_per_sec": 0, 00:04:59.182 "r_mbytes_per_sec": 0, 00:04:59.182 "w_mbytes_per_sec": 0 00:04:59.182 }, 00:04:59.182 "claimed": true, 00:04:59.182 "claim_type": "exclusive_write", 00:04:59.182 "zoned": false, 00:04:59.182 "supported_io_types": { 00:04:59.183 "read": true, 00:04:59.183 "write": true, 00:04:59.183 "unmap": true, 00:04:59.183 "write_zeroes": true, 00:04:59.183 "flush": true, 00:04:59.183 "reset": true, 00:04:59.183 "compare": false, 00:04:59.183 "compare_and_write": false, 00:04:59.183 "abort": true, 00:04:59.183 "nvme_admin": false, 00:04:59.183 "nvme_io": false 00:04:59.183 }, 00:04:59.183 "memory_domains": [ 00:04:59.183 { 00:04:59.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.183 "dma_device_type": 2 00:04:59.183 } 00:04:59.183 ], 00:04:59.183 "driver_specific": {} 00:04:59.183 }, 00:04:59.183 { 00:04:59.183 "name": "Passthru0", 00:04:59.183 "aliases": [ 00:04:59.183 "420439a7-ee4e-5b48-9818-d6e49930e9fa" 00:04:59.183 ], 00:04:59.183 "product_name": "passthru", 00:04:59.183 "block_size": 512, 00:04:59.183 "num_blocks": 16384, 00:04:59.183 "uuid": "420439a7-ee4e-5b48-9818-d6e49930e9fa", 00:04:59.183 "assigned_rate_limits": { 00:04:59.183 "rw_ios_per_sec": 0, 00:04:59.183 "rw_mbytes_per_sec": 0, 00:04:59.183 "r_mbytes_per_sec": 0, 00:04:59.183 
"w_mbytes_per_sec": 0 00:04:59.183 }, 00:04:59.183 "claimed": false, 00:04:59.183 "zoned": false, 00:04:59.183 "supported_io_types": { 00:04:59.183 "read": true, 00:04:59.183 "write": true, 00:04:59.183 "unmap": true, 00:04:59.183 "write_zeroes": true, 00:04:59.183 "flush": true, 00:04:59.183 "reset": true, 00:04:59.183 "compare": false, 00:04:59.183 "compare_and_write": false, 00:04:59.183 "abort": true, 00:04:59.183 "nvme_admin": false, 00:04:59.183 "nvme_io": false 00:04:59.183 }, 00:04:59.183 "memory_domains": [ 00:04:59.183 { 00:04:59.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.183 "dma_device_type": 2 00:04:59.183 } 00:04:59.183 ], 00:04:59.183 "driver_specific": { 00:04:59.183 "passthru": { 00:04:59.183 "name": "Passthru0", 00:04:59.183 "base_bdev_name": "Malloc0" 00:04:59.183 } 00:04:59.183 } 00:04:59.183 } 00:04:59.183 ]' 00:04:59.183 10:42:48 -- rpc/rpc.sh@21 -- # jq length 00:04:59.183 10:42:48 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:59.183 10:42:48 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:59.183 10:42:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.183 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.183 10:42:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.183 10:42:48 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:59.183 10:42:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.183 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.183 10:42:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.183 10:42:48 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:59.183 10:42:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.183 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.183 10:42:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.183 10:42:48 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:59.183 10:42:48 -- rpc/rpc.sh@26 -- # jq length 00:04:59.442 10:42:48 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:59.442 00:04:59.442 real 0m0.281s 00:04:59.442 user 0m0.172s 00:04:59.442 sys 0m0.052s 00:04:59.442 10:42:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:59.442 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.442 ************************************ 00:04:59.442 END TEST rpc_integrity 00:04:59.442 ************************************ 00:04:59.442 10:42:48 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:59.442 10:42:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.442 10:42:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.442 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.442 ************************************ 00:04:59.442 START TEST rpc_plugins 00:04:59.442 ************************************ 00:04:59.442 10:42:48 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:04:59.442 10:42:48 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:59.442 10:42:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.442 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.442 10:42:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.442 10:42:48 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:59.442 10:42:48 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:59.442 10:42:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.442 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.442 10:42:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.442 10:42:48 -- 
rpc/rpc.sh@31 -- # bdevs='[ 00:04:59.442 { 00:04:59.442 "name": "Malloc1", 00:04:59.442 "aliases": [ 00:04:59.442 "bb8b9bd1-b084-492c-823d-51e90f7d2406" 00:04:59.442 ], 00:04:59.442 "product_name": "Malloc disk", 00:04:59.442 "block_size": 4096, 00:04:59.442 "num_blocks": 256, 00:04:59.442 "uuid": "bb8b9bd1-b084-492c-823d-51e90f7d2406", 00:04:59.442 "assigned_rate_limits": { 00:04:59.442 "rw_ios_per_sec": 0, 00:04:59.442 "rw_mbytes_per_sec": 0, 00:04:59.442 "r_mbytes_per_sec": 0, 00:04:59.442 "w_mbytes_per_sec": 0 00:04:59.442 }, 00:04:59.442 "claimed": false, 00:04:59.442 "zoned": false, 00:04:59.442 "supported_io_types": { 00:04:59.442 "read": true, 00:04:59.442 "write": true, 00:04:59.442 "unmap": true, 00:04:59.442 "write_zeroes": true, 00:04:59.442 "flush": true, 00:04:59.442 "reset": true, 00:04:59.442 "compare": false, 00:04:59.442 "compare_and_write": false, 00:04:59.442 "abort": true, 00:04:59.442 "nvme_admin": false, 00:04:59.442 "nvme_io": false 00:04:59.442 }, 00:04:59.442 "memory_domains": [ 00:04:59.442 { 00:04:59.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.442 "dma_device_type": 2 00:04:59.442 } 00:04:59.442 ], 00:04:59.442 "driver_specific": {} 00:04:59.442 } 00:04:59.442 ]' 00:04:59.442 10:42:48 -- rpc/rpc.sh@32 -- # jq length 00:04:59.442 10:42:48 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:59.442 10:42:48 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:59.442 10:42:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.442 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.442 10:42:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.442 10:42:48 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:59.442 10:42:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.442 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.442 10:42:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.442 10:42:48 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:59.442 10:42:48 -- rpc/rpc.sh@36 -- # jq length 00:04:59.442 10:42:48 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:59.442 00:04:59.442 real 0m0.142s 00:04:59.442 user 0m0.078s 00:04:59.442 sys 0m0.026s 00:04:59.442 10:42:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:59.442 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.442 ************************************ 00:04:59.442 END TEST rpc_plugins 00:04:59.442 ************************************ 00:04:59.442 10:42:48 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:59.442 10:42:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.442 10:42:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.442 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.701 ************************************ 00:04:59.701 START TEST rpc_trace_cmd_test 00:04:59.701 ************************************ 00:04:59.701 10:42:48 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:04:59.701 10:42:48 -- rpc/rpc.sh@40 -- # local info 00:04:59.701 10:42:48 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:59.701 10:42:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.701 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.701 10:42:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.701 10:42:48 -- rpc/rpc.sh@42 -- # info='{ 00:04:59.701 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1282908", 00:04:59.701 "tpoint_group_mask": "0x8", 00:04:59.701 "iscsi_conn": { 00:04:59.701 "mask": "0x2", 
00:04:59.701 "tpoint_mask": "0x0" 00:04:59.701 }, 00:04:59.701 "scsi": { 00:04:59.701 "mask": "0x4", 00:04:59.701 "tpoint_mask": "0x0" 00:04:59.701 }, 00:04:59.701 "bdev": { 00:04:59.701 "mask": "0x8", 00:04:59.701 "tpoint_mask": "0xffffffffffffffff" 00:04:59.701 }, 00:04:59.701 "nvmf_rdma": { 00:04:59.701 "mask": "0x10", 00:04:59.701 "tpoint_mask": "0x0" 00:04:59.701 }, 00:04:59.701 "nvmf_tcp": { 00:04:59.701 "mask": "0x20", 00:04:59.701 "tpoint_mask": "0x0" 00:04:59.701 }, 00:04:59.701 "ftl": { 00:04:59.701 "mask": "0x40", 00:04:59.701 "tpoint_mask": "0x0" 00:04:59.701 }, 00:04:59.701 "blobfs": { 00:04:59.701 "mask": "0x80", 00:04:59.701 "tpoint_mask": "0x0" 00:04:59.701 }, 00:04:59.701 "dsa": { 00:04:59.701 "mask": "0x200", 00:04:59.701 "tpoint_mask": "0x0" 00:04:59.701 }, 00:04:59.701 "thread": { 00:04:59.701 "mask": "0x400", 00:04:59.701 "tpoint_mask": "0x0" 00:04:59.701 }, 00:04:59.701 "nvme_pcie": { 00:04:59.701 "mask": "0x800", 00:04:59.701 "tpoint_mask": "0x0" 00:04:59.701 }, 00:04:59.701 "iaa": { 00:04:59.701 "mask": "0x1000", 00:04:59.701 "tpoint_mask": "0x0" 00:04:59.701 }, 00:04:59.701 "nvme_tcp": { 00:04:59.701 "mask": "0x2000", 00:04:59.701 "tpoint_mask": "0x0" 00:04:59.701 }, 00:04:59.701 "bdev_nvme": { 00:04:59.701 "mask": "0x4000", 00:04:59.701 "tpoint_mask": "0x0" 00:04:59.701 } 00:04:59.701 }' 00:04:59.701 10:42:48 -- rpc/rpc.sh@43 -- # jq length 00:04:59.701 10:42:48 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:59.701 10:42:48 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:59.701 10:42:48 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:59.701 10:42:48 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:59.701 10:42:48 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:59.701 10:42:48 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:59.701 10:42:48 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:59.701 10:42:48 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:59.701 10:42:48 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:59.701 00:04:59.701 real 0m0.212s 00:04:59.701 user 0m0.173s 00:04:59.701 sys 0m0.032s 00:04:59.701 10:42:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:59.701 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.702 ************************************ 00:04:59.702 END TEST rpc_trace_cmd_test 00:04:59.702 ************************************ 00:04:59.960 10:42:48 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:59.960 10:42:48 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:59.960 10:42:48 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:59.960 10:42:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.960 10:42:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.960 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.960 ************************************ 00:04:59.960 START TEST rpc_daemon_integrity 00:04:59.961 ************************************ 00:04:59.961 10:42:48 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:59.961 10:42:48 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:59.961 10:42:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.961 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.961 10:42:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.961 10:42:48 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:59.961 10:42:48 -- rpc/rpc.sh@13 -- # jq length 00:04:59.961 10:42:48 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:59.961 10:42:48 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:59.961 
10:42:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.961 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.961 10:42:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.961 10:42:48 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:59.961 10:42:48 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:59.961 10:42:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.961 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.961 10:42:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.961 10:42:48 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:59.961 { 00:04:59.961 "name": "Malloc2", 00:04:59.961 "aliases": [ 00:04:59.961 "41daf051-36ab-4b04-9e7f-0789c3b7eaca" 00:04:59.961 ], 00:04:59.961 "product_name": "Malloc disk", 00:04:59.961 "block_size": 512, 00:04:59.961 "num_blocks": 16384, 00:04:59.961 "uuid": "41daf051-36ab-4b04-9e7f-0789c3b7eaca", 00:04:59.961 "assigned_rate_limits": { 00:04:59.961 "rw_ios_per_sec": 0, 00:04:59.961 "rw_mbytes_per_sec": 0, 00:04:59.961 "r_mbytes_per_sec": 0, 00:04:59.961 "w_mbytes_per_sec": 0 00:04:59.961 }, 00:04:59.961 "claimed": false, 00:04:59.961 "zoned": false, 00:04:59.961 "supported_io_types": { 00:04:59.961 "read": true, 00:04:59.961 "write": true, 00:04:59.961 "unmap": true, 00:04:59.961 "write_zeroes": true, 00:04:59.961 "flush": true, 00:04:59.961 "reset": true, 00:04:59.961 "compare": false, 00:04:59.961 "compare_and_write": false, 00:04:59.961 "abort": true, 00:04:59.961 "nvme_admin": false, 00:04:59.961 "nvme_io": false 00:04:59.961 }, 00:04:59.961 "memory_domains": [ 00:04:59.961 { 00:04:59.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.961 "dma_device_type": 2 00:04:59.961 } 00:04:59.961 ], 00:04:59.961 "driver_specific": {} 00:04:59.961 } 00:04:59.961 ]' 00:04:59.961 10:42:48 -- rpc/rpc.sh@17 -- # jq length 00:04:59.961 10:42:48 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:59.961 10:42:48 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:59.961 10:42:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.961 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.961 [2024-12-15 10:42:48.850705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:59.961 [2024-12-15 10:42:48.850738] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:59.961 [2024-12-15 10:42:48.850755] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4ff6980 00:04:59.961 [2024-12-15 10:42:48.850765] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:59.961 [2024-12-15 10:42:48.851473] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:59.961 [2024-12-15 10:42:48.851495] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:59.961 Passthru0 00:04:59.961 10:42:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.961 10:42:48 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:59.961 10:42:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.961 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.961 10:42:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.961 10:42:48 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:59.961 { 00:04:59.961 "name": "Malloc2", 00:04:59.961 "aliases": [ 00:04:59.961 "41daf051-36ab-4b04-9e7f-0789c3b7eaca" 00:04:59.961 ], 00:04:59.961 "product_name": "Malloc disk", 00:04:59.961 "block_size": 512, 00:04:59.961 "num_blocks": 16384, 
00:04:59.961 "uuid": "41daf051-36ab-4b04-9e7f-0789c3b7eaca", 00:04:59.961 "assigned_rate_limits": { 00:04:59.961 "rw_ios_per_sec": 0, 00:04:59.961 "rw_mbytes_per_sec": 0, 00:04:59.961 "r_mbytes_per_sec": 0, 00:04:59.961 "w_mbytes_per_sec": 0 00:04:59.961 }, 00:04:59.961 "claimed": true, 00:04:59.961 "claim_type": "exclusive_write", 00:04:59.961 "zoned": false, 00:04:59.961 "supported_io_types": { 00:04:59.961 "read": true, 00:04:59.961 "write": true, 00:04:59.961 "unmap": true, 00:04:59.961 "write_zeroes": true, 00:04:59.961 "flush": true, 00:04:59.961 "reset": true, 00:04:59.961 "compare": false, 00:04:59.961 "compare_and_write": false, 00:04:59.961 "abort": true, 00:04:59.961 "nvme_admin": false, 00:04:59.961 "nvme_io": false 00:04:59.961 }, 00:04:59.961 "memory_domains": [ 00:04:59.961 { 00:04:59.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.961 "dma_device_type": 2 00:04:59.961 } 00:04:59.961 ], 00:04:59.961 "driver_specific": {} 00:04:59.961 }, 00:04:59.961 { 00:04:59.961 "name": "Passthru0", 00:04:59.961 "aliases": [ 00:04:59.961 "9007f5b3-12c5-5baf-b770-62a5d03a5515" 00:04:59.961 ], 00:04:59.961 "product_name": "passthru", 00:04:59.961 "block_size": 512, 00:04:59.961 "num_blocks": 16384, 00:04:59.961 "uuid": "9007f5b3-12c5-5baf-b770-62a5d03a5515", 00:04:59.961 "assigned_rate_limits": { 00:04:59.961 "rw_ios_per_sec": 0, 00:04:59.961 "rw_mbytes_per_sec": 0, 00:04:59.961 "r_mbytes_per_sec": 0, 00:04:59.961 "w_mbytes_per_sec": 0 00:04:59.961 }, 00:04:59.961 "claimed": false, 00:04:59.961 "zoned": false, 00:04:59.961 "supported_io_types": { 00:04:59.961 "read": true, 00:04:59.961 "write": true, 00:04:59.961 "unmap": true, 00:04:59.961 "write_zeroes": true, 00:04:59.961 "flush": true, 00:04:59.961 "reset": true, 00:04:59.961 "compare": false, 00:04:59.961 "compare_and_write": false, 00:04:59.961 "abort": true, 00:04:59.961 "nvme_admin": false, 00:04:59.961 "nvme_io": false 00:04:59.961 }, 00:04:59.961 "memory_domains": [ 00:04:59.961 { 00:04:59.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.961 "dma_device_type": 2 00:04:59.961 } 00:04:59.961 ], 00:04:59.961 "driver_specific": { 00:04:59.961 "passthru": { 00:04:59.961 "name": "Passthru0", 00:04:59.961 "base_bdev_name": "Malloc2" 00:04:59.961 } 00:04:59.961 } 00:04:59.961 } 00:04:59.961 ]' 00:04:59.961 10:42:48 -- rpc/rpc.sh@21 -- # jq length 00:04:59.961 10:42:48 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:59.961 10:42:48 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:59.961 10:42:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.961 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.961 10:42:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.961 10:42:48 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:59.961 10:42:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.961 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.961 10:42:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.961 10:42:48 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:59.961 10:42:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.961 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.961 10:42:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.961 10:42:48 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:59.961 10:42:48 -- rpc/rpc.sh@26 -- # jq length 00:05:00.220 10:42:48 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:00.220 00:05:00.220 real 0m0.274s 00:05:00.220 user 0m0.161s 00:05:00.220 sys 0m0.053s 00:05:00.220 
10:42:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:00.220 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:05:00.220 ************************************ 00:05:00.220 END TEST rpc_daemon_integrity 00:05:00.220 ************************************ 00:05:00.220 10:42:49 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:00.220 10:42:49 -- rpc/rpc.sh@84 -- # killprocess 1282908 00:05:00.220 10:42:49 -- common/autotest_common.sh@936 -- # '[' -z 1282908 ']' 00:05:00.220 10:42:49 -- common/autotest_common.sh@940 -- # kill -0 1282908 00:05:00.220 10:42:49 -- common/autotest_common.sh@941 -- # uname 00:05:00.220 10:42:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:00.220 10:42:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1282908 00:05:00.220 10:42:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:00.220 10:42:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:00.220 10:42:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1282908' 00:05:00.220 killing process with pid 1282908 00:05:00.220 10:42:49 -- common/autotest_common.sh@955 -- # kill 1282908 00:05:00.220 10:42:49 -- common/autotest_common.sh@960 -- # wait 1282908 00:05:00.478 00:05:00.478 real 0m2.494s 00:05:00.478 user 0m3.111s 00:05:00.478 sys 0m0.759s 00:05:00.478 10:42:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:00.478 10:42:49 -- common/autotest_common.sh@10 -- # set +x 00:05:00.478 ************************************ 00:05:00.478 END TEST rpc 00:05:00.478 ************************************ 00:05:00.478 10:42:49 -- spdk/autotest.sh@164 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:00.478 10:42:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:00.478 10:42:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.478 10:42:49 -- common/autotest_common.sh@10 -- # set +x 00:05:00.478 ************************************ 00:05:00.478 START TEST rpc_client 00:05:00.478 ************************************ 00:05:00.478 10:42:49 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:00.737 * Looking for test storage... 
00:05:00.737 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:05:00.737 10:42:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:00.737 10:42:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:00.737 10:42:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:00.737 10:42:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:00.737 10:42:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:00.737 10:42:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:00.737 10:42:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:00.737 10:42:49 -- scripts/common.sh@335 -- # IFS=.-: 00:05:00.737 10:42:49 -- scripts/common.sh@335 -- # read -ra ver1 00:05:00.737 10:42:49 -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.738 10:42:49 -- scripts/common.sh@336 -- # read -ra ver2 00:05:00.738 10:42:49 -- scripts/common.sh@337 -- # local 'op=<' 00:05:00.738 10:42:49 -- scripts/common.sh@339 -- # ver1_l=2 00:05:00.738 10:42:49 -- scripts/common.sh@340 -- # ver2_l=1 00:05:00.738 10:42:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:00.738 10:42:49 -- scripts/common.sh@343 -- # case "$op" in 00:05:00.738 10:42:49 -- scripts/common.sh@344 -- # : 1 00:05:00.738 10:42:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:00.738 10:42:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:00.738 10:42:49 -- scripts/common.sh@364 -- # decimal 1 00:05:00.738 10:42:49 -- scripts/common.sh@352 -- # local d=1 00:05:00.738 10:42:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.738 10:42:49 -- scripts/common.sh@354 -- # echo 1 00:05:00.738 10:42:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:00.738 10:42:49 -- scripts/common.sh@365 -- # decimal 2 00:05:00.738 10:42:49 -- scripts/common.sh@352 -- # local d=2 00:05:00.738 10:42:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.738 10:42:49 -- scripts/common.sh@354 -- # echo 2 00:05:00.738 10:42:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:00.738 10:42:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:00.738 10:42:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:00.738 10:42:49 -- scripts/common.sh@367 -- # return 0 00:05:00.738 10:42:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.738 10:42:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:00.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.738 --rc genhtml_branch_coverage=1 00:05:00.738 --rc genhtml_function_coverage=1 00:05:00.738 --rc genhtml_legend=1 00:05:00.738 --rc geninfo_all_blocks=1 00:05:00.738 --rc geninfo_unexecuted_blocks=1 00:05:00.738 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:00.738 ' 00:05:00.738 10:42:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:00.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.738 --rc genhtml_branch_coverage=1 00:05:00.738 --rc genhtml_function_coverage=1 00:05:00.738 --rc genhtml_legend=1 00:05:00.738 --rc geninfo_all_blocks=1 00:05:00.738 --rc geninfo_unexecuted_blocks=1 00:05:00.738 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:00.738 ' 00:05:00.738 10:42:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:00.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.738 --rc genhtml_branch_coverage=1 
00:05:00.738 --rc genhtml_function_coverage=1 00:05:00.738 --rc genhtml_legend=1 00:05:00.738 --rc geninfo_all_blocks=1 00:05:00.738 --rc geninfo_unexecuted_blocks=1 00:05:00.738 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:00.738 ' 00:05:00.738 10:42:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:00.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.738 --rc genhtml_branch_coverage=1 00:05:00.738 --rc genhtml_function_coverage=1 00:05:00.738 --rc genhtml_legend=1 00:05:00.738 --rc geninfo_all_blocks=1 00:05:00.738 --rc geninfo_unexecuted_blocks=1 00:05:00.738 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:00.738 ' 00:05:00.738 10:42:49 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:00.738 OK 00:05:00.738 10:42:49 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:00.738 00:05:00.738 real 0m0.176s 00:05:00.738 user 0m0.095s 00:05:00.738 sys 0m0.085s 00:05:00.738 10:42:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:00.738 10:42:49 -- common/autotest_common.sh@10 -- # set +x 00:05:00.738 ************************************ 00:05:00.738 END TEST rpc_client 00:05:00.738 ************************************ 00:05:00.738 10:42:49 -- spdk/autotest.sh@165 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:00.738 10:42:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:00.738 10:42:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.738 10:42:49 -- common/autotest_common.sh@10 -- # set +x 00:05:00.738 ************************************ 00:05:00.738 START TEST json_config 00:05:00.738 ************************************ 00:05:00.738 10:42:49 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:00.738 10:42:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:00.738 10:42:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:00.738 10:42:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:00.997 10:42:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:00.997 10:42:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:00.997 10:42:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:00.997 10:42:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:00.997 10:42:49 -- scripts/common.sh@335 -- # IFS=.-: 00:05:00.997 10:42:49 -- scripts/common.sh@335 -- # read -ra ver1 00:05:00.997 10:42:49 -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.997 10:42:49 -- scripts/common.sh@336 -- # read -ra ver2 00:05:00.997 10:42:49 -- scripts/common.sh@337 -- # local 'op=<' 00:05:00.997 10:42:49 -- scripts/common.sh@339 -- # ver1_l=2 00:05:00.997 10:42:49 -- scripts/common.sh@340 -- # ver2_l=1 00:05:00.997 10:42:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:00.997 10:42:49 -- scripts/common.sh@343 -- # case "$op" in 00:05:00.997 10:42:49 -- scripts/common.sh@344 -- # : 1 00:05:00.997 10:42:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:00.997 10:42:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.997 10:42:49 -- scripts/common.sh@364 -- # decimal 1 00:05:00.997 10:42:49 -- scripts/common.sh@352 -- # local d=1 00:05:00.997 10:42:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.997 10:42:49 -- scripts/common.sh@354 -- # echo 1 00:05:00.997 10:42:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:00.997 10:42:49 -- scripts/common.sh@365 -- # decimal 2 00:05:00.997 10:42:49 -- scripts/common.sh@352 -- # local d=2 00:05:00.997 10:42:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.997 10:42:49 -- scripts/common.sh@354 -- # echo 2 00:05:00.997 10:42:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:00.997 10:42:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:00.997 10:42:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:00.997 10:42:49 -- scripts/common.sh@367 -- # return 0 00:05:00.997 10:42:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.997 10:42:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:00.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.997 --rc genhtml_branch_coverage=1 00:05:00.997 --rc genhtml_function_coverage=1 00:05:00.997 --rc genhtml_legend=1 00:05:00.997 --rc geninfo_all_blocks=1 00:05:00.997 --rc geninfo_unexecuted_blocks=1 00:05:00.997 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:00.997 ' 00:05:00.997 10:42:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:00.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.997 --rc genhtml_branch_coverage=1 00:05:00.997 --rc genhtml_function_coverage=1 00:05:00.997 --rc genhtml_legend=1 00:05:00.997 --rc geninfo_all_blocks=1 00:05:00.997 --rc geninfo_unexecuted_blocks=1 00:05:00.997 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:00.997 ' 00:05:00.997 10:42:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:00.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.997 --rc genhtml_branch_coverage=1 00:05:00.997 --rc genhtml_function_coverage=1 00:05:00.997 --rc genhtml_legend=1 00:05:00.997 --rc geninfo_all_blocks=1 00:05:00.997 --rc geninfo_unexecuted_blocks=1 00:05:00.997 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:00.997 ' 00:05:00.997 10:42:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:00.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.997 --rc genhtml_branch_coverage=1 00:05:00.997 --rc genhtml_function_coverage=1 00:05:00.997 --rc genhtml_legend=1 00:05:00.997 --rc geninfo_all_blocks=1 00:05:00.997 --rc geninfo_unexecuted_blocks=1 00:05:00.997 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:00.997 ' 00:05:00.997 10:42:49 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:00.997 10:42:49 -- nvmf/common.sh@7 -- # uname -s 00:05:00.997 10:42:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:00.997 10:42:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:00.997 10:42:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:00.997 10:42:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:00.997 10:42:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:00.997 10:42:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:00.997 10:42:49 -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:00.997 10:42:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:00.997 10:42:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:00.997 10:42:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:00.997 10:42:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:00.997 10:42:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:00.997 10:42:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:00.997 10:42:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:00.997 10:42:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:00.997 10:42:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:00.997 10:42:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:00.997 10:42:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:00.997 10:42:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:00.997 10:42:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.997 10:42:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.998 10:42:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.998 10:42:49 -- paths/export.sh@5 -- # export PATH 00:05:00.998 10:42:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.998 10:42:49 -- nvmf/common.sh@46 -- # : 0 00:05:00.998 10:42:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:00.998 10:42:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:00.998 10:42:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:00.998 10:42:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:00.998 10:42:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:00.998 10:42:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:00.998 10:42:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:00.998 
10:42:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:00.998 10:42:49 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:00.998 10:42:49 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:00.998 10:42:49 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:00.998 10:42:49 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:00.998 10:42:49 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:00.998 WARNING: No tests are enabled so not running JSON configuration tests 00:05:00.998 10:42:49 -- json_config/json_config.sh@27 -- # exit 0 00:05:00.998 00:05:00.998 real 0m0.174s 00:05:00.998 user 0m0.105s 00:05:00.998 sys 0m0.078s 00:05:00.998 10:42:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:00.998 10:42:49 -- common/autotest_common.sh@10 -- # set +x 00:05:00.998 ************************************ 00:05:00.998 END TEST json_config 00:05:00.998 ************************************ 00:05:00.998 10:42:49 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:00.998 10:42:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:00.998 10:42:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.998 10:42:49 -- common/autotest_common.sh@10 -- # set +x 00:05:00.998 ************************************ 00:05:00.998 START TEST json_config_extra_key 00:05:00.998 ************************************ 00:05:00.998 10:42:49 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:00.998 10:42:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:00.998 10:42:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:00.998 10:42:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:01.258 10:42:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:01.258 10:42:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:01.258 10:42:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:01.258 10:42:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:01.258 10:42:50 -- scripts/common.sh@335 -- # IFS=.-: 00:05:01.258 10:42:50 -- scripts/common.sh@335 -- # read -ra ver1 00:05:01.258 10:42:50 -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.258 10:42:50 -- scripts/common.sh@336 -- # read -ra ver2 00:05:01.258 10:42:50 -- scripts/common.sh@337 -- # local 'op=<' 00:05:01.258 10:42:50 -- scripts/common.sh@339 -- # ver1_l=2 00:05:01.258 10:42:50 -- scripts/common.sh@340 -- # ver2_l=1 00:05:01.258 10:42:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:01.258 10:42:50 -- scripts/common.sh@343 -- # case "$op" in 00:05:01.258 10:42:50 -- scripts/common.sh@344 -- # : 1 00:05:01.258 10:42:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:01.258 10:42:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.258 10:42:50 -- scripts/common.sh@364 -- # decimal 1 00:05:01.258 10:42:50 -- scripts/common.sh@352 -- # local d=1 00:05:01.258 10:42:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.258 10:42:50 -- scripts/common.sh@354 -- # echo 1 00:05:01.258 10:42:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:01.258 10:42:50 -- scripts/common.sh@365 -- # decimal 2 00:05:01.258 10:42:50 -- scripts/common.sh@352 -- # local d=2 00:05:01.258 10:42:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.258 10:42:50 -- scripts/common.sh@354 -- # echo 2 00:05:01.258 10:42:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:01.258 10:42:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:01.258 10:42:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:01.258 10:42:50 -- scripts/common.sh@367 -- # return 0 00:05:01.258 10:42:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.258 10:42:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:01.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.258 --rc genhtml_branch_coverage=1 00:05:01.258 --rc genhtml_function_coverage=1 00:05:01.258 --rc genhtml_legend=1 00:05:01.258 --rc geninfo_all_blocks=1 00:05:01.258 --rc geninfo_unexecuted_blocks=1 00:05:01.258 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:01.258 ' 00:05:01.258 10:42:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:01.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.258 --rc genhtml_branch_coverage=1 00:05:01.258 --rc genhtml_function_coverage=1 00:05:01.258 --rc genhtml_legend=1 00:05:01.258 --rc geninfo_all_blocks=1 00:05:01.258 --rc geninfo_unexecuted_blocks=1 00:05:01.258 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:01.258 ' 00:05:01.258 10:42:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:01.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.258 --rc genhtml_branch_coverage=1 00:05:01.258 --rc genhtml_function_coverage=1 00:05:01.258 --rc genhtml_legend=1 00:05:01.258 --rc geninfo_all_blocks=1 00:05:01.258 --rc geninfo_unexecuted_blocks=1 00:05:01.258 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:01.258 ' 00:05:01.258 10:42:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:01.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.258 --rc genhtml_branch_coverage=1 00:05:01.258 --rc genhtml_function_coverage=1 00:05:01.258 --rc genhtml_legend=1 00:05:01.258 --rc geninfo_all_blocks=1 00:05:01.258 --rc geninfo_unexecuted_blocks=1 00:05:01.258 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:01.258 ' 00:05:01.258 10:42:50 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:01.258 10:42:50 -- nvmf/common.sh@7 -- # uname -s 00:05:01.258 10:42:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:01.258 10:42:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:01.258 10:42:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:01.258 10:42:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:01.258 10:42:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:01.258 10:42:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:01.258 10:42:50 -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:01.258 10:42:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:01.258 10:42:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:01.258 10:42:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:01.258 10:42:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:01.258 10:42:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:01.258 10:42:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:01.258 10:42:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:01.258 10:42:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:01.258 10:42:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:01.258 10:42:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:01.258 10:42:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:01.258 10:42:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:01.258 10:42:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.258 10:42:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.258 10:42:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.258 10:42:50 -- paths/export.sh@5 -- # export PATH 00:05:01.258 10:42:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.258 10:42:50 -- nvmf/common.sh@46 -- # : 0 00:05:01.258 10:42:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:01.258 10:42:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:01.258 10:42:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:01.258 10:42:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:01.258 10:42:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:01.258 10:42:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:01.258 10:42:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:01.258 
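
For context: the nvmf/common.sh sourcing traced here only seeds transport defaults (NVMF_PORT, NVME_HOST, NVME_CONNECT, ...); no NVMe-oF path is exercised in this short-fuzz run. A minimal sketch of how a transport test might consume those defaults — the subsystem NQN and the connect invocation are illustrative assumptions, not taken from this log:

    source test/nvmf/common.sh                  # sets NVMF_PORT=4420, NVME_CONNECT='nvme connect', ...
    subnqn=nqn.2016-06.io.spdk:cnode1           # hypothetical subsystem NQN
    # NVME_HOST already carries --hostnqn/--hostid generated by `nvme gen-hostnqn`
    $NVME_CONNECT "${NVME_HOST[@]}" -t tcp \
        -a "$NVMF_TCP_IP_ADDRESS" -s "$NVMF_PORT" -n "$subnqn"
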
10:42:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:01.258 10:42:50 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:01.258 10:42:50 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:01.258 10:42:50 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:01.258 10:42:50 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:01.258 10:42:50 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:01.258 10:42:50 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:01.258 10:42:50 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:01.258 10:42:50 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:01.258 10:42:50 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:01.258 10:42:50 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:01.258 INFO: launching applications... 00:05:01.258 10:42:50 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:01.258 10:42:50 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:01.258 10:42:50 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:01.258 10:42:50 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:01.258 10:42:50 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:01.258 10:42:50 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=1283707 00:05:01.258 10:42:50 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:01.258 Waiting for target to run... 00:05:01.258 10:42:50 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 1283707 /var/tmp/spdk_tgt.sock 00:05:01.258 10:42:50 -- common/autotest_common.sh@829 -- # '[' -z 1283707 ']' 00:05:01.258 10:42:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:01.259 10:42:50 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:01.259 10:42:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.259 10:42:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:01.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:01.259 10:42:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.259 10:42:50 -- common/autotest_common.sh@10 -- # set +x 00:05:01.259 [2024-12-15 10:42:50.094779] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:01.259 [2024-12-15 10:42:50.094872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1283707 ] 00:05:01.259 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.826 [2024-12-15 10:42:50.536548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.826 [2024-12-15 10:42:50.626324] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:01.826 [2024-12-15 10:42:50.626432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.085 10:42:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.085 10:42:50 -- common/autotest_common.sh@862 -- # return 0 00:05:02.085 10:42:50 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:02.085 00:05:02.085 10:42:50 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:02.085 INFO: shutting down applications... 00:05:02.085 10:42:50 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:02.085 10:42:50 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:02.085 10:42:50 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:02.085 10:42:50 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 1283707 ]] 00:05:02.085 10:42:50 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 1283707 00:05:02.085 10:42:50 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:02.085 10:42:50 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:02.085 10:42:50 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1283707 00:05:02.085 10:42:50 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:02.653 10:42:51 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:02.653 10:42:51 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:02.653 10:42:51 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1283707 00:05:02.653 10:42:51 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:02.653 10:42:51 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:02.653 10:42:51 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:02.653 10:42:51 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:02.653 SPDK target shutdown done 00:05:02.653 10:42:51 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:02.653 Success 00:05:02.653 00:05:02.653 real 0m1.543s 00:05:02.653 user 0m1.110s 00:05:02.653 sys 0m0.596s 00:05:02.653 10:42:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:02.653 10:42:51 -- common/autotest_common.sh@10 -- # set +x 00:05:02.653 ************************************ 00:05:02.653 END TEST json_config_extra_key 00:05:02.653 ************************************ 00:05:02.653 10:42:51 -- spdk/autotest.sh@167 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:02.653 10:42:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.653 10:42:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.653 10:42:51 -- common/autotest_common.sh@10 -- # set +x 00:05:02.653 ************************************ 00:05:02.653 START TEST alias_rpc 00:05:02.653 ************************************ 00:05:02.653 10:42:51 -- 
common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:02.653 * Looking for test storage... 00:05:02.653 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:05:02.653 10:42:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:02.653 10:42:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:02.653 10:42:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:02.653 10:42:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:02.653 10:42:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:02.653 10:42:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:02.653 10:42:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:02.653 10:42:51 -- scripts/common.sh@335 -- # IFS=.-: 00:05:02.653 10:42:51 -- scripts/common.sh@335 -- # read -ra ver1 00:05:02.653 10:42:51 -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.653 10:42:51 -- scripts/common.sh@336 -- # read -ra ver2 00:05:02.653 10:42:51 -- scripts/common.sh@337 -- # local 'op=<' 00:05:02.653 10:42:51 -- scripts/common.sh@339 -- # ver1_l=2 00:05:02.653 10:42:51 -- scripts/common.sh@340 -- # ver2_l=1 00:05:02.653 10:42:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:02.653 10:42:51 -- scripts/common.sh@343 -- # case "$op" in 00:05:02.653 10:42:51 -- scripts/common.sh@344 -- # : 1 00:05:02.653 10:42:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:02.653 10:42:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:02.653 10:42:51 -- scripts/common.sh@364 -- # decimal 1 00:05:02.653 10:42:51 -- scripts/common.sh@352 -- # local d=1 00:05:02.653 10:42:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.653 10:42:51 -- scripts/common.sh@354 -- # echo 1 00:05:02.653 10:42:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:02.653 10:42:51 -- scripts/common.sh@365 -- # decimal 2 00:05:02.653 10:42:51 -- scripts/common.sh@352 -- # local d=2 00:05:02.653 10:42:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.653 10:42:51 -- scripts/common.sh@354 -- # echo 2 00:05:02.653 10:42:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:02.653 10:42:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:02.653 10:42:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:02.653 10:42:51 -- scripts/common.sh@367 -- # return 0 00:05:02.653 10:42:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.653 10:42:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:02.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.653 --rc genhtml_branch_coverage=1 00:05:02.653 --rc genhtml_function_coverage=1 00:05:02.653 --rc genhtml_legend=1 00:05:02.653 --rc geninfo_all_blocks=1 00:05:02.653 --rc geninfo_unexecuted_blocks=1 00:05:02.653 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:02.653 ' 00:05:02.653 10:42:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:02.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.653 --rc genhtml_branch_coverage=1 00:05:02.653 --rc genhtml_function_coverage=1 00:05:02.653 --rc genhtml_legend=1 00:05:02.653 --rc geninfo_all_blocks=1 00:05:02.653 --rc geninfo_unexecuted_blocks=1 00:05:02.653 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:02.653 ' 00:05:02.653 
10:42:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:02.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.653 --rc genhtml_branch_coverage=1 00:05:02.653 --rc genhtml_function_coverage=1 00:05:02.653 --rc genhtml_legend=1 00:05:02.653 --rc geninfo_all_blocks=1 00:05:02.653 --rc geninfo_unexecuted_blocks=1 00:05:02.653 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:02.653 ' 00:05:02.653 10:42:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:02.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.653 --rc genhtml_branch_coverage=1 00:05:02.653 --rc genhtml_function_coverage=1 00:05:02.653 --rc genhtml_legend=1 00:05:02.653 --rc geninfo_all_blocks=1 00:05:02.653 --rc geninfo_unexecuted_blocks=1 00:05:02.653 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:02.653 ' 00:05:02.653 10:42:51 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:02.653 10:42:51 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1284027 00:05:02.654 10:42:51 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1284027 00:05:02.654 10:42:51 -- common/autotest_common.sh@829 -- # '[' -z 1284027 ']' 00:05:02.654 10:42:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.654 10:42:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.654 10:42:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.654 10:42:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.654 10:42:51 -- common/autotest_common.sh@10 -- # set +x 00:05:02.913 10:42:51 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:02.913 [2024-12-15 10:42:51.690606] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:02.913 [2024-12-15 10:42:51.690694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1284027 ] 00:05:02.913 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.913 [2024-12-15 10:42:51.758174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.913 [2024-12-15 10:42:51.826117] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:02.913 [2024-12-15 10:42:51.826227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.848 10:42:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.848 10:42:52 -- common/autotest_common.sh@862 -- # return 0 00:05:03.848 10:42:52 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:03.848 10:42:52 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1284027 00:05:03.848 10:42:52 -- common/autotest_common.sh@936 -- # '[' -z 1284027 ']' 00:05:03.848 10:42:52 -- common/autotest_common.sh@940 -- # kill -0 1284027 00:05:03.848 10:42:52 -- common/autotest_common.sh@941 -- # uname 00:05:03.848 10:42:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:03.848 10:42:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1284027 00:05:03.848 10:42:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:03.848 10:42:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:03.848 10:42:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1284027' 00:05:03.848 killing process with pid 1284027 00:05:03.848 10:42:52 -- common/autotest_common.sh@955 -- # kill 1284027 00:05:03.848 10:42:52 -- common/autotest_common.sh@960 -- # wait 1284027 00:05:04.107 00:05:04.107 real 0m1.571s 00:05:04.107 user 0m1.674s 00:05:04.107 sys 0m0.446s 00:05:04.107 10:42:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:04.107 10:42:53 -- common/autotest_common.sh@10 -- # set +x 00:05:04.107 ************************************ 00:05:04.107 END TEST alias_rpc 00:05:04.107 ************************************ 00:05:04.107 10:42:53 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:05:04.107 10:42:53 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:04.107 10:42:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:04.107 10:42:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.107 10:42:53 -- common/autotest_common.sh@10 -- # set +x 00:05:04.107 ************************************ 00:05:04.107 START TEST spdkcli_tcp 00:05:04.107 ************************************ 00:05:04.107 10:42:53 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:04.367 * Looking for test storage... 
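
The alias_rpc run above follows the standard SPDK test lifecycle: launch spdk_tgt, wait for its RPC socket, drive it with rpc.py, then tear it down. A condensed sketch reconstructed from the trace; waitforlisten and killprocess live in common/autotest_common.sh, and the real helpers retry and special-case sudo-wrapped targets, which is elided here:

    # killprocess, simplified to the path actually taken in this trace
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                      # still alive?
        [ "$(uname)" = Linux ] && \
            process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1          # real helper handles sudo wrappers differently
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                     # reap and propagate exit status
    }

    trap 'killprocess $spdk_tgt_pid; exit 1' ERR        # as in alias_rpc.sh@10
    build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    waitforlisten $spdk_tgt_pid                         # blocks until /var/tmp/spdk.sock answers
    scripts/rpc.py load_config -i                       # the test feeds JSON config on stdin
    killprocess $spdk_tgt_pid
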
00:05:04.367 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:05:04.367 10:42:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:04.367 10:42:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:04.367 10:42:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:04.367 10:42:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:04.367 10:42:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:04.367 10:42:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:04.367 10:42:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:04.367 10:42:53 -- scripts/common.sh@335 -- # IFS=.-: 00:05:04.367 10:42:53 -- scripts/common.sh@335 -- # read -ra ver1 00:05:04.367 10:42:53 -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.367 10:42:53 -- scripts/common.sh@336 -- # read -ra ver2 00:05:04.367 10:42:53 -- scripts/common.sh@337 -- # local 'op=<' 00:05:04.367 10:42:53 -- scripts/common.sh@339 -- # ver1_l=2 00:05:04.367 10:42:53 -- scripts/common.sh@340 -- # ver2_l=1 00:05:04.367 10:42:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:04.367 10:42:53 -- scripts/common.sh@343 -- # case "$op" in 00:05:04.367 10:42:53 -- scripts/common.sh@344 -- # : 1 00:05:04.367 10:42:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:04.367 10:42:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:04.367 10:42:53 -- scripts/common.sh@364 -- # decimal 1 00:05:04.367 10:42:53 -- scripts/common.sh@352 -- # local d=1 00:05:04.367 10:42:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.367 10:42:53 -- scripts/common.sh@354 -- # echo 1 00:05:04.367 10:42:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:04.367 10:42:53 -- scripts/common.sh@365 -- # decimal 2 00:05:04.367 10:42:53 -- scripts/common.sh@352 -- # local d=2 00:05:04.367 10:42:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.367 10:42:53 -- scripts/common.sh@354 -- # echo 2 00:05:04.368 10:42:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:04.368 10:42:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:04.368 10:42:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:04.368 10:42:53 -- scripts/common.sh@367 -- # return 0 00:05:04.368 10:42:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.368 10:42:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:04.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.368 --rc genhtml_branch_coverage=1 00:05:04.368 --rc genhtml_function_coverage=1 00:05:04.368 --rc genhtml_legend=1 00:05:04.368 --rc geninfo_all_blocks=1 00:05:04.368 --rc geninfo_unexecuted_blocks=1 00:05:04.368 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:04.368 ' 00:05:04.368 10:42:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:04.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.368 --rc genhtml_branch_coverage=1 00:05:04.368 --rc genhtml_function_coverage=1 00:05:04.368 --rc genhtml_legend=1 00:05:04.368 --rc geninfo_all_blocks=1 00:05:04.368 --rc geninfo_unexecuted_blocks=1 00:05:04.368 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:04.368 ' 00:05:04.368 10:42:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:04.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.368 --rc genhtml_branch_coverage=1 
00:05:04.368 --rc genhtml_function_coverage=1 00:05:04.368 --rc genhtml_legend=1 00:05:04.368 --rc geninfo_all_blocks=1 00:05:04.368 --rc geninfo_unexecuted_blocks=1 00:05:04.368 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:04.368 ' 00:05:04.368 10:42:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:04.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.368 --rc genhtml_branch_coverage=1 00:05:04.368 --rc genhtml_function_coverage=1 00:05:04.368 --rc genhtml_legend=1 00:05:04.368 --rc geninfo_all_blocks=1 00:05:04.368 --rc geninfo_unexecuted_blocks=1 00:05:04.368 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:04.368 ' 00:05:04.368 10:42:53 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:05:04.368 10:42:53 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:04.368 10:42:53 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:05:04.368 10:42:53 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:04.368 10:42:53 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:04.368 10:42:53 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:04.368 10:42:53 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:04.368 10:42:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:04.368 10:42:53 -- common/autotest_common.sh@10 -- # set +x 00:05:04.368 10:42:53 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1284364 00:05:04.368 10:42:53 -- spdkcli/tcp.sh@27 -- # waitforlisten 1284364 00:05:04.368 10:42:53 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:04.368 10:42:53 -- common/autotest_common.sh@829 -- # '[' -z 1284364 ']' 00:05:04.368 10:42:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.368 10:42:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.368 10:42:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.368 10:42:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.368 10:42:53 -- common/autotest_common.sh@10 -- # set +x 00:05:04.368 [2024-12-15 10:42:53.300467] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:04.368 [2024-12-15 10:42:53.300534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1284364 ] 00:05:04.368 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.368 [2024-12-15 10:42:53.367193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.627 [2024-12-15 10:42:53.438111] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:04.627 [2024-12-15 10:42:53.438252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.627 [2024-12-15 10:42:53.438254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.194 10:42:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.194 10:42:54 -- common/autotest_common.sh@862 -- # return 0 00:05:05.194 10:42:54 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:05.194 10:42:54 -- spdkcli/tcp.sh@31 -- # socat_pid=1284565 00:05:05.194 10:42:54 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:05.454 [ 00:05:05.454 "spdk_get_version", 00:05:05.454 "rpc_get_methods", 00:05:05.454 "trace_get_info", 00:05:05.454 "trace_get_tpoint_group_mask", 00:05:05.454 "trace_disable_tpoint_group", 00:05:05.454 "trace_enable_tpoint_group", 00:05:05.454 "trace_clear_tpoint_mask", 00:05:05.454 "trace_set_tpoint_mask", 00:05:05.454 "vfu_tgt_set_base_path", 00:05:05.454 "framework_get_pci_devices", 00:05:05.454 "framework_get_config", 00:05:05.454 "framework_get_subsystems", 00:05:05.454 "iobuf_get_stats", 00:05:05.454 "iobuf_set_options", 00:05:05.454 "sock_set_default_impl", 00:05:05.454 "sock_impl_set_options", 00:05:05.454 "sock_impl_get_options", 00:05:05.454 "vmd_rescan", 00:05:05.454 "vmd_remove_device", 00:05:05.454 "vmd_enable", 00:05:05.454 "accel_get_stats", 00:05:05.454 "accel_set_options", 00:05:05.454 "accel_set_driver", 00:05:05.454 "accel_crypto_key_destroy", 00:05:05.454 "accel_crypto_keys_get", 00:05:05.454 "accel_crypto_key_create", 00:05:05.454 "accel_assign_opc", 00:05:05.454 "accel_get_module_info", 00:05:05.454 "accel_get_opc_assignments", 00:05:05.454 "notify_get_notifications", 00:05:05.454 "notify_get_types", 00:05:05.454 "bdev_get_histogram", 00:05:05.454 "bdev_enable_histogram", 00:05:05.454 "bdev_set_qos_limit", 00:05:05.454 "bdev_set_qd_sampling_period", 00:05:05.454 "bdev_get_bdevs", 00:05:05.454 "bdev_reset_iostat", 00:05:05.454 "bdev_get_iostat", 00:05:05.454 "bdev_examine", 00:05:05.454 "bdev_wait_for_examine", 00:05:05.454 "bdev_set_options", 00:05:05.454 "scsi_get_devices", 00:05:05.454 "thread_set_cpumask", 00:05:05.454 "framework_get_scheduler", 00:05:05.454 "framework_set_scheduler", 00:05:05.454 "framework_get_reactors", 00:05:05.454 "thread_get_io_channels", 00:05:05.454 "thread_get_pollers", 00:05:05.454 "thread_get_stats", 00:05:05.454 "framework_monitor_context_switch", 00:05:05.454 "spdk_kill_instance", 00:05:05.454 "log_enable_timestamps", 00:05:05.454 "log_get_flags", 00:05:05.454 "log_clear_flag", 00:05:05.454 "log_set_flag", 00:05:05.454 "log_get_level", 00:05:05.454 "log_set_level", 00:05:05.454 "log_get_print_level", 00:05:05.454 "log_set_print_level", 00:05:05.454 "framework_enable_cpumask_locks", 00:05:05.454 "framework_disable_cpumask_locks", 00:05:05.454 "framework_wait_init", 00:05:05.454 
"framework_start_init", 00:05:05.454 "virtio_blk_create_transport", 00:05:05.454 "virtio_blk_get_transports", 00:05:05.454 "vhost_controller_set_coalescing", 00:05:05.454 "vhost_get_controllers", 00:05:05.454 "vhost_delete_controller", 00:05:05.454 "vhost_create_blk_controller", 00:05:05.454 "vhost_scsi_controller_remove_target", 00:05:05.454 "vhost_scsi_controller_add_target", 00:05:05.454 "vhost_start_scsi_controller", 00:05:05.454 "vhost_create_scsi_controller", 00:05:05.454 "ublk_recover_disk", 00:05:05.454 "ublk_get_disks", 00:05:05.454 "ublk_stop_disk", 00:05:05.454 "ublk_start_disk", 00:05:05.454 "ublk_destroy_target", 00:05:05.454 "ublk_create_target", 00:05:05.454 "nbd_get_disks", 00:05:05.454 "nbd_stop_disk", 00:05:05.454 "nbd_start_disk", 00:05:05.454 "env_dpdk_get_mem_stats", 00:05:05.454 "nvmf_subsystem_get_listeners", 00:05:05.454 "nvmf_subsystem_get_qpairs", 00:05:05.454 "nvmf_subsystem_get_controllers", 00:05:05.454 "nvmf_get_stats", 00:05:05.454 "nvmf_get_transports", 00:05:05.454 "nvmf_create_transport", 00:05:05.454 "nvmf_get_targets", 00:05:05.454 "nvmf_delete_target", 00:05:05.454 "nvmf_create_target", 00:05:05.454 "nvmf_subsystem_allow_any_host", 00:05:05.454 "nvmf_subsystem_remove_host", 00:05:05.454 "nvmf_subsystem_add_host", 00:05:05.454 "nvmf_subsystem_remove_ns", 00:05:05.454 "nvmf_subsystem_add_ns", 00:05:05.454 "nvmf_subsystem_listener_set_ana_state", 00:05:05.454 "nvmf_discovery_get_referrals", 00:05:05.454 "nvmf_discovery_remove_referral", 00:05:05.454 "nvmf_discovery_add_referral", 00:05:05.454 "nvmf_subsystem_remove_listener", 00:05:05.454 "nvmf_subsystem_add_listener", 00:05:05.454 "nvmf_delete_subsystem", 00:05:05.454 "nvmf_create_subsystem", 00:05:05.454 "nvmf_get_subsystems", 00:05:05.454 "nvmf_set_crdt", 00:05:05.454 "nvmf_set_config", 00:05:05.454 "nvmf_set_max_subsystems", 00:05:05.454 "iscsi_set_options", 00:05:05.454 "iscsi_get_auth_groups", 00:05:05.454 "iscsi_auth_group_remove_secret", 00:05:05.454 "iscsi_auth_group_add_secret", 00:05:05.454 "iscsi_delete_auth_group", 00:05:05.454 "iscsi_create_auth_group", 00:05:05.454 "iscsi_set_discovery_auth", 00:05:05.454 "iscsi_get_options", 00:05:05.454 "iscsi_target_node_request_logout", 00:05:05.454 "iscsi_target_node_set_redirect", 00:05:05.454 "iscsi_target_node_set_auth", 00:05:05.454 "iscsi_target_node_add_lun", 00:05:05.454 "iscsi_get_connections", 00:05:05.454 "iscsi_portal_group_set_auth", 00:05:05.454 "iscsi_start_portal_group", 00:05:05.454 "iscsi_delete_portal_group", 00:05:05.454 "iscsi_create_portal_group", 00:05:05.454 "iscsi_get_portal_groups", 00:05:05.454 "iscsi_delete_target_node", 00:05:05.454 "iscsi_target_node_remove_pg_ig_maps", 00:05:05.454 "iscsi_target_node_add_pg_ig_maps", 00:05:05.454 "iscsi_create_target_node", 00:05:05.454 "iscsi_get_target_nodes", 00:05:05.454 "iscsi_delete_initiator_group", 00:05:05.454 "iscsi_initiator_group_remove_initiators", 00:05:05.454 "iscsi_initiator_group_add_initiators", 00:05:05.454 "iscsi_create_initiator_group", 00:05:05.454 "iscsi_get_initiator_groups", 00:05:05.454 "vfu_virtio_create_scsi_endpoint", 00:05:05.454 "vfu_virtio_scsi_remove_target", 00:05:05.454 "vfu_virtio_scsi_add_target", 00:05:05.454 "vfu_virtio_create_blk_endpoint", 00:05:05.454 "vfu_virtio_delete_endpoint", 00:05:05.454 "iaa_scan_accel_module", 00:05:05.454 "dsa_scan_accel_module", 00:05:05.454 "ioat_scan_accel_module", 00:05:05.454 "accel_error_inject_error", 00:05:05.454 "bdev_iscsi_delete", 00:05:05.454 "bdev_iscsi_create", 00:05:05.454 "bdev_iscsi_set_options", 
00:05:05.454 "bdev_virtio_attach_controller", 00:05:05.454 "bdev_virtio_scsi_get_devices", 00:05:05.454 "bdev_virtio_detach_controller", 00:05:05.454 "bdev_virtio_blk_set_hotplug", 00:05:05.454 "bdev_ftl_set_property", 00:05:05.454 "bdev_ftl_get_properties", 00:05:05.454 "bdev_ftl_get_stats", 00:05:05.454 "bdev_ftl_unmap", 00:05:05.454 "bdev_ftl_unload", 00:05:05.454 "bdev_ftl_delete", 00:05:05.454 "bdev_ftl_load", 00:05:05.454 "bdev_ftl_create", 00:05:05.454 "bdev_aio_delete", 00:05:05.454 "bdev_aio_rescan", 00:05:05.454 "bdev_aio_create", 00:05:05.455 "blobfs_create", 00:05:05.455 "blobfs_detect", 00:05:05.455 "blobfs_set_cache_size", 00:05:05.455 "bdev_zone_block_delete", 00:05:05.455 "bdev_zone_block_create", 00:05:05.455 "bdev_delay_delete", 00:05:05.455 "bdev_delay_create", 00:05:05.455 "bdev_delay_update_latency", 00:05:05.455 "bdev_split_delete", 00:05:05.455 "bdev_split_create", 00:05:05.455 "bdev_error_inject_error", 00:05:05.455 "bdev_error_delete", 00:05:05.455 "bdev_error_create", 00:05:05.455 "bdev_raid_set_options", 00:05:05.455 "bdev_raid_remove_base_bdev", 00:05:05.455 "bdev_raid_add_base_bdev", 00:05:05.455 "bdev_raid_delete", 00:05:05.455 "bdev_raid_create", 00:05:05.455 "bdev_raid_get_bdevs", 00:05:05.455 "bdev_lvol_grow_lvstore", 00:05:05.455 "bdev_lvol_get_lvols", 00:05:05.455 "bdev_lvol_get_lvstores", 00:05:05.455 "bdev_lvol_delete", 00:05:05.455 "bdev_lvol_set_read_only", 00:05:05.455 "bdev_lvol_resize", 00:05:05.455 "bdev_lvol_decouple_parent", 00:05:05.455 "bdev_lvol_inflate", 00:05:05.455 "bdev_lvol_rename", 00:05:05.455 "bdev_lvol_clone_bdev", 00:05:05.455 "bdev_lvol_clone", 00:05:05.455 "bdev_lvol_snapshot", 00:05:05.455 "bdev_lvol_create", 00:05:05.455 "bdev_lvol_delete_lvstore", 00:05:05.455 "bdev_lvol_rename_lvstore", 00:05:05.455 "bdev_lvol_create_lvstore", 00:05:05.455 "bdev_passthru_delete", 00:05:05.455 "bdev_passthru_create", 00:05:05.455 "bdev_nvme_cuse_unregister", 00:05:05.455 "bdev_nvme_cuse_register", 00:05:05.455 "bdev_opal_new_user", 00:05:05.455 "bdev_opal_set_lock_state", 00:05:05.455 "bdev_opal_delete", 00:05:05.455 "bdev_opal_get_info", 00:05:05.455 "bdev_opal_create", 00:05:05.455 "bdev_nvme_opal_revert", 00:05:05.455 "bdev_nvme_opal_init", 00:05:05.455 "bdev_nvme_send_cmd", 00:05:05.455 "bdev_nvme_get_path_iostat", 00:05:05.455 "bdev_nvme_get_mdns_discovery_info", 00:05:05.455 "bdev_nvme_stop_mdns_discovery", 00:05:05.455 "bdev_nvme_start_mdns_discovery", 00:05:05.455 "bdev_nvme_set_multipath_policy", 00:05:05.455 "bdev_nvme_set_preferred_path", 00:05:05.455 "bdev_nvme_get_io_paths", 00:05:05.455 "bdev_nvme_remove_error_injection", 00:05:05.455 "bdev_nvme_add_error_injection", 00:05:05.455 "bdev_nvme_get_discovery_info", 00:05:05.455 "bdev_nvme_stop_discovery", 00:05:05.455 "bdev_nvme_start_discovery", 00:05:05.455 "bdev_nvme_get_controller_health_info", 00:05:05.455 "bdev_nvme_disable_controller", 00:05:05.455 "bdev_nvme_enable_controller", 00:05:05.455 "bdev_nvme_reset_controller", 00:05:05.455 "bdev_nvme_get_transport_statistics", 00:05:05.455 "bdev_nvme_apply_firmware", 00:05:05.455 "bdev_nvme_detach_controller", 00:05:05.455 "bdev_nvme_get_controllers", 00:05:05.455 "bdev_nvme_attach_controller", 00:05:05.455 "bdev_nvme_set_hotplug", 00:05:05.455 "bdev_nvme_set_options", 00:05:05.455 "bdev_null_resize", 00:05:05.455 "bdev_null_delete", 00:05:05.455 "bdev_null_create", 00:05:05.455 "bdev_malloc_delete", 00:05:05.455 "bdev_malloc_create" 00:05:05.455 ] 00:05:05.455 10:42:54 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 
00:05:05.455 10:42:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:05.455 10:42:54 -- common/autotest_common.sh@10 -- # set +x 00:05:05.455 10:42:54 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:05.455 10:42:54 -- spdkcli/tcp.sh@38 -- # killprocess 1284364 00:05:05.455 10:42:54 -- common/autotest_common.sh@936 -- # '[' -z 1284364 ']' 00:05:05.455 10:42:54 -- common/autotest_common.sh@940 -- # kill -0 1284364 00:05:05.455 10:42:54 -- common/autotest_common.sh@941 -- # uname 00:05:05.455 10:42:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:05.455 10:42:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1284364 00:05:05.455 10:42:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:05.455 10:42:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:05.455 10:42:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1284364' 00:05:05.455 killing process with pid 1284364 00:05:05.455 10:42:54 -- common/autotest_common.sh@955 -- # kill 1284364 00:05:05.455 10:42:54 -- common/autotest_common.sh@960 -- # wait 1284364 00:05:06.024 00:05:06.024 real 0m1.637s 00:05:06.024 user 0m2.995s 00:05:06.024 sys 0m0.512s 00:05:06.024 10:42:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:06.024 10:42:54 -- common/autotest_common.sh@10 -- # set +x 00:05:06.024 ************************************ 00:05:06.024 END TEST spdkcli_tcp 00:05:06.024 ************************************ 00:05:06.024 10:42:54 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:06.024 10:42:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:06.024 10:42:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.024 10:42:54 -- common/autotest_common.sh@10 -- # set +x 00:05:06.024 ************************************ 00:05:06.024 START TEST dpdk_mem_utility 00:05:06.024 ************************************ 00:05:06.024 10:42:54 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:06.024 * Looking for test storage... 
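
Every suite in this log is dispatched the same way: run_test NAME SCRIPT [ARGS...], which prints the START/END banners and the real/user/sys timing seen throughout. A minimal sketch of that observable behavior; the real wrapper in common/autotest_common.sh also toggles xtrace and records per-test timing data:

    run_test() {
        [ $# -le 1 ] && return 1               # the '[' 2 -le 1 ']' guard in the trace
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                              # run the suite; timing prints as real/user/sys
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
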
00:05:06.024 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:05:06.024 10:42:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:06.024 10:42:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:06.024 10:42:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:06.024 10:42:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:06.024 10:42:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:06.024 10:42:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:06.024 10:42:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:06.024 10:42:54 -- scripts/common.sh@335 -- # IFS=.-: 00:05:06.024 10:42:54 -- scripts/common.sh@335 -- # read -ra ver1 00:05:06.024 10:42:54 -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.024 10:42:54 -- scripts/common.sh@336 -- # read -ra ver2 00:05:06.024 10:42:54 -- scripts/common.sh@337 -- # local 'op=<' 00:05:06.024 10:42:54 -- scripts/common.sh@339 -- # ver1_l=2 00:05:06.024 10:42:54 -- scripts/common.sh@340 -- # ver2_l=1 00:05:06.024 10:42:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:06.024 10:42:54 -- scripts/common.sh@343 -- # case "$op" in 00:05:06.024 10:42:54 -- scripts/common.sh@344 -- # : 1 00:05:06.024 10:42:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:06.024 10:42:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:06.024 10:42:54 -- scripts/common.sh@364 -- # decimal 1 00:05:06.024 10:42:54 -- scripts/common.sh@352 -- # local d=1 00:05:06.024 10:42:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.024 10:42:54 -- scripts/common.sh@354 -- # echo 1 00:05:06.024 10:42:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:06.024 10:42:54 -- scripts/common.sh@365 -- # decimal 2 00:05:06.024 10:42:54 -- scripts/common.sh@352 -- # local d=2 00:05:06.024 10:42:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.024 10:42:54 -- scripts/common.sh@354 -- # echo 2 00:05:06.024 10:42:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:06.024 10:42:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:06.024 10:42:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:06.024 10:42:54 -- scripts/common.sh@367 -- # return 0 00:05:06.024 10:42:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.024 10:42:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:06.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.024 --rc genhtml_branch_coverage=1 00:05:06.024 --rc genhtml_function_coverage=1 00:05:06.024 --rc genhtml_legend=1 00:05:06.024 --rc geninfo_all_blocks=1 00:05:06.024 --rc geninfo_unexecuted_blocks=1 00:05:06.024 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:06.024 ' 00:05:06.024 10:42:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:06.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.024 --rc genhtml_branch_coverage=1 00:05:06.024 --rc genhtml_function_coverage=1 00:05:06.024 --rc genhtml_legend=1 00:05:06.024 --rc geninfo_all_blocks=1 00:05:06.024 --rc geninfo_unexecuted_blocks=1 00:05:06.024 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:06.024 ' 00:05:06.024 10:42:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:06.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.024 --rc 
genhtml_branch_coverage=1 00:05:06.024 --rc genhtml_function_coverage=1 00:05:06.024 --rc genhtml_legend=1 00:05:06.024 --rc geninfo_all_blocks=1 00:05:06.024 --rc geninfo_unexecuted_blocks=1 00:05:06.024 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:06.024 ' 00:05:06.024 10:42:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:06.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.024 --rc genhtml_branch_coverage=1 00:05:06.024 --rc genhtml_function_coverage=1 00:05:06.024 --rc genhtml_legend=1 00:05:06.024 --rc geninfo_all_blocks=1 00:05:06.024 --rc geninfo_unexecuted_blocks=1 00:05:06.024 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:06.024 ' 00:05:06.024 10:42:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:06.024 10:42:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1284708 00:05:06.024 10:42:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1284708 00:05:06.024 10:42:54 -- common/autotest_common.sh@829 -- # '[' -z 1284708 ']' 00:05:06.024 10:42:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.024 10:42:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:06.024 10:42:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.024 10:42:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:06.024 10:42:54 -- common/autotest_common.sh@10 -- # set +x 00:05:06.024 10:42:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.024 [2024-12-15 10:42:54.986598] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:06.024 [2024-12-15 10:42:54.986689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1284708 ] 00:05:06.024 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.351 [2024-12-15 10:42:55.056408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.351 [2024-12-15 10:42:55.132221] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:06.351 [2024-12-15 10:42:55.132332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.003 10:42:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.003 10:42:55 -- common/autotest_common.sh@862 -- # return 0 00:05:07.003 10:42:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:07.003 10:42:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:07.003 10:42:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.003 10:42:55 -- common/autotest_common.sh@10 -- # set +x 00:05:07.003 { 00:05:07.003 "filename": "/tmp/spdk_mem_dump.txt" 00:05:07.003 } 00:05:07.003 10:42:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.003 10:42:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:07.003 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:07.003 1 heaps totaling size 814.000000 MiB 00:05:07.003 size: 814.000000 MiB heap id: 0 00:05:07.003 end heaps---------- 00:05:07.003 8 mempools totaling size 598.116089 MiB 00:05:07.003 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:07.003 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:07.003 size: 84.521057 MiB name: bdev_io_1284708 00:05:07.003 size: 51.011292 MiB name: evtpool_1284708 00:05:07.003 size: 50.003479 MiB name: msgpool_1284708 00:05:07.003 size: 21.763794 MiB name: PDU_Pool 00:05:07.003 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:07.003 size: 0.026123 MiB name: Session_Pool 00:05:07.003 end mempools------- 00:05:07.003 6 memzones totaling size 4.142822 MiB 00:05:07.003 size: 1.000366 MiB name: RG_ring_0_1284708 00:05:07.003 size: 1.000366 MiB name: RG_ring_1_1284708 00:05:07.003 size: 1.000366 MiB name: RG_ring_4_1284708 00:05:07.003 size: 1.000366 MiB name: RG_ring_5_1284708 00:05:07.003 size: 0.125366 MiB name: RG_ring_2_1284708 00:05:07.003 size: 0.015991 MiB name: RG_ring_3_1284708 00:05:07.003 end memzones------- 00:05:07.003 10:42:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:07.003 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:07.003 list of free elements. 
size: 12.519348 MiB 00:05:07.003 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:07.003 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:07.003 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:07.003 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:07.003 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:07.003 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:07.003 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:07.003 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:07.003 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:07.003 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:07.003 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:07.003 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:07.003 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:07.003 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:07.003 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:07.003 list of standard malloc elements. size: 199.218079 MiB 00:05:07.003 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:07.004 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:07.004 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:07.004 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:07.004 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:07.004 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:07.004 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:07.004 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:07.004 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:07.004 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:07.004 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:07.004 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:07.004 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:07.004 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:07.004 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:07.004 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:07.004 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:07.004 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:07.004 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:07.004 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:07.004 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:07.004 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:07.004 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:07.004 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:07.004 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:07.004 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:07.004 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:07.004 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:07.004 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:07.004 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:07.004 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:07.004 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:07.004 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:07.004 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:07.004 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:07.004 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:07.004 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:07.004 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:07.004 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:07.004 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:07.004 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:07.004 list of memzone associated elements. size: 602.262573 MiB 00:05:07.004 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:07.004 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:07.004 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:07.004 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:07.004 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:07.004 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1284708_0 00:05:07.004 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:07.004 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1284708_0 00:05:07.004 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:07.004 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1284708_0 00:05:07.004 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:07.004 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:07.004 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:07.004 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:07.004 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:07.004 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1284708 00:05:07.004 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:07.004 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1284708 00:05:07.004 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:07.004 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1284708 00:05:07.004 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:07.004 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:07.004 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:07.004 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:07.004 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:07.004 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:07.004 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:07.004 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:07.004 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:07.004 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1284708 00:05:07.004 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:07.004 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1284708 00:05:07.004 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:07.004 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1284708 00:05:07.004 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:07.004 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1284708 00:05:07.004 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:07.004 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1284708 00:05:07.004 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:07.004 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:07.004 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:07.004 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:07.004 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:07.004 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:07.004 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:07.004 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1284708 00:05:07.004 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:07.004 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:07.004 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:07.004 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:07.004 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:07.004 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1284708 00:05:07.004 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:07.004 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:07.004 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:07.004 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1284708 00:05:07.004 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:07.004 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1284708 00:05:07.004 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:07.004 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:07.004 10:42:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:07.004 10:42:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1284708 00:05:07.004 10:42:55 -- common/autotest_common.sh@936 -- # '[' -z 1284708 ']' 00:05:07.004 10:42:55 -- common/autotest_common.sh@940 -- # kill -0 1284708 00:05:07.004 10:42:55 -- common/autotest_common.sh@941 -- # uname 00:05:07.004 10:42:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:07.004 10:42:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1284708 00:05:07.004 10:42:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:07.004 10:42:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:07.004 10:42:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1284708' 00:05:07.004 killing process with pid 1284708 00:05:07.004 10:42:55 -- common/autotest_common.sh@955 -- # kill 1284708 00:05:07.004 10:42:55 -- common/autotest_common.sh@960 -- # wait 1284708 00:05:07.573 00:05:07.573 real 0m1.505s 00:05:07.573 user 0m1.539s 00:05:07.573 sys 0m0.468s 00:05:07.573 10:42:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:07.573 10:42:56 -- common/autotest_common.sh@10 -- # set +x 00:05:07.573 ************************************ 00:05:07.573 END TEST dpdk_mem_utility 00:05:07.573 ************************************ 00:05:07.573 10:42:56 -- spdk/autotest.sh@174 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:07.573 10:42:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:07.573 10:42:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.573 10:42:56 -- common/autotest_common.sh@10 -- # set +x 
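The dpdk_mem_utility teardown above follows the harness's killprocess pattern: probe liveness with kill -0, inspect the comm name with ps (SPDK reactors report as reactor_N, and a sudo wrapper gets special handling), then kill and wait. A minimal sketch of that pattern, reconstructed from the traced commands rather than copied from autotest_common.sh:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0   # liveness probe, as traced above
        if [ "$(uname)" = Linux ]; then
            # the trace compares the comm name against "sudo"; here it is reactor_0
            ps --no-headers -o comm= "$pid"
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # works because the target is a child of the test shell
    }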
00:05:07.573 ************************************ 00:05:07.573 START TEST event 00:05:07.573 ************************************ 00:05:07.573 10:42:56 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:07.573 * Looking for test storage... 00:05:07.573 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:07.573 10:42:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:07.573 10:42:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:07.573 10:42:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:07.573 10:42:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:07.573 10:42:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:07.573 10:42:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:07.573 10:42:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:07.573 10:42:56 -- scripts/common.sh@335 -- # IFS=.-: 00:05:07.573 10:42:56 -- scripts/common.sh@335 -- # read -ra ver1 00:05:07.573 10:42:56 -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.573 10:42:56 -- scripts/common.sh@336 -- # read -ra ver2 00:05:07.573 10:42:56 -- scripts/common.sh@337 -- # local 'op=<' 00:05:07.573 10:42:56 -- scripts/common.sh@339 -- # ver1_l=2 00:05:07.573 10:42:56 -- scripts/common.sh@340 -- # ver2_l=1 00:05:07.573 10:42:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:07.573 10:42:56 -- scripts/common.sh@343 -- # case "$op" in 00:05:07.573 10:42:56 -- scripts/common.sh@344 -- # : 1 00:05:07.573 10:42:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:07.573 10:42:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.573 10:42:56 -- scripts/common.sh@364 -- # decimal 1 00:05:07.573 10:42:56 -- scripts/common.sh@352 -- # local d=1 00:05:07.573 10:42:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.573 10:42:56 -- scripts/common.sh@354 -- # echo 1 00:05:07.573 10:42:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:07.573 10:42:56 -- scripts/common.sh@365 -- # decimal 2 00:05:07.573 10:42:56 -- scripts/common.sh@352 -- # local d=2 00:05:07.573 10:42:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.573 10:42:56 -- scripts/common.sh@354 -- # echo 2 00:05:07.573 10:42:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:07.573 10:42:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:07.573 10:42:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:07.573 10:42:56 -- scripts/common.sh@367 -- # return 0 00:05:07.573 10:42:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.573 10:42:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:07.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.573 --rc genhtml_branch_coverage=1 00:05:07.573 --rc genhtml_function_coverage=1 00:05:07.573 --rc genhtml_legend=1 00:05:07.573 --rc geninfo_all_blocks=1 00:05:07.573 --rc geninfo_unexecuted_blocks=1 00:05:07.573 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:07.573 ' 00:05:07.573 10:42:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:07.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.573 --rc genhtml_branch_coverage=1 00:05:07.573 --rc genhtml_function_coverage=1 00:05:07.573 --rc genhtml_legend=1 00:05:07.573 --rc geninfo_all_blocks=1 00:05:07.573 --rc geninfo_unexecuted_blocks=1 00:05:07.573 
--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:07.573 ' 00:05:07.573 10:42:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:07.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.573 --rc genhtml_branch_coverage=1 00:05:07.573 --rc genhtml_function_coverage=1 00:05:07.573 --rc genhtml_legend=1 00:05:07.573 --rc geninfo_all_blocks=1 00:05:07.573 --rc geninfo_unexecuted_blocks=1 00:05:07.573 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:07.573 ' 00:05:07.573 10:42:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:07.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.573 --rc genhtml_branch_coverage=1 00:05:07.573 --rc genhtml_function_coverage=1 00:05:07.573 --rc genhtml_legend=1 00:05:07.573 --rc geninfo_all_blocks=1 00:05:07.573 --rc geninfo_unexecuted_blocks=1 00:05:07.573 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:07.573 ' 00:05:07.573 10:42:56 -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:07.573 10:42:56 -- bdev/nbd_common.sh@6 -- # set -e 00:05:07.573 10:42:56 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:07.573 10:42:56 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:07.573 10:42:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.573 10:42:56 -- common/autotest_common.sh@10 -- # set +x 00:05:07.573 ************************************ 00:05:07.573 START TEST event_perf 00:05:07.573 ************************************ 00:05:07.573 10:42:56 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:07.573 Running I/O for 1 seconds...[2024-12-15 10:42:56.538744] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:07.573 [2024-12-15 10:42:56.538831] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285047 ] 00:05:07.573 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.833 [2024-12-15 10:42:56.609471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:07.833 [2024-12-15 10:42:56.680844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.833 [2024-12-15 10:42:56.680940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:07.833 [2024-12-15 10:42:56.681021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:07.833 [2024-12-15 10:42:56.681023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.769 Running I/O for 1 seconds... 00:05:08.769 lcore 0: 189418 00:05:08.769 lcore 1: 189417 00:05:08.769 lcore 2: 189418 00:05:08.769 lcore 3: 189416 00:05:08.769 done. 
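event_perf above was launched with -m 0xF and -t 1 and reports one counter per reactor: mask 0xF selects lcores 0-3, matching the four "lcore N:" lines, each of which scheduled roughly 189k events in the one-second window. A quick illustrative way to decode such a core mask (not part of the test suite):

    mask=0xF                  # the -m core mask from the command line
    for ((i = 0; i < 32; i++)); do
        (( (mask >> i) & 1 )) && echo "lcore $i selected"
    done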
00:05:08.769 00:05:08.769 real 0m1.226s 00:05:08.769 user 0m4.131s 00:05:08.769 sys 0m0.092s 00:05:08.769 10:42:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.769 10:42:57 -- common/autotest_common.sh@10 -- # set +x 00:05:08.769 ************************************ 00:05:08.769 END TEST event_perf 00:05:08.769 ************************************ 00:05:09.028 10:42:57 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:09.028 10:42:57 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:09.028 10:42:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.028 10:42:57 -- common/autotest_common.sh@10 -- # set +x 00:05:09.028 ************************************ 00:05:09.028 START TEST event_reactor 00:05:09.028 ************************************ 00:05:09.028 10:42:57 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:09.028 [2024-12-15 10:42:57.799484] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:09.028 [2024-12-15 10:42:57.799546] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285332 ] 00:05:09.028 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.028 [2024-12-15 10:42:57.865999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.028 [2024-12-15 10:42:57.934025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.406 test_start 00:05:10.406 oneshot 00:05:10.406 tick 100 00:05:10.406 tick 100 00:05:10.406 tick 250 00:05:10.406 tick 100 00:05:10.406 tick 100 00:05:10.406 tick 100 00:05:10.406 tick 250 00:05:10.406 tick 500 00:05:10.406 tick 100 00:05:10.406 tick 100 00:05:10.406 tick 250 00:05:10.406 tick 100 00:05:10.406 tick 100 00:05:10.406 test_end 00:05:10.406 00:05:10.406 real 0m1.206s 00:05:10.406 user 0m1.122s 00:05:10.406 sys 0m0.079s 00:05:10.406 10:42:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.406 10:42:58 -- common/autotest_common.sh@10 -- # set +x 00:05:10.406 ************************************ 00:05:10.406 END TEST event_reactor 00:05:10.406 ************************************ 00:05:10.406 10:42:59 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:10.406 10:42:59 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:10.406 10:42:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.406 10:42:59 -- common/autotest_common.sh@10 -- # set +x 00:05:10.406 ************************************ 00:05:10.406 START TEST event_reactor_perf 00:05:10.406 ************************************ 00:05:10.406 10:42:59 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:10.406 [2024-12-15 10:42:59.055975] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:10.406 [2024-12-15 10:42:59.056064] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285616 ] 00:05:10.406 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.406 [2024-12-15 10:42:59.127778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.406 [2024-12-15 10:42:59.196240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.343 test_start 00:05:11.343 test_end 00:05:11.343 Performance: 958647 events per second 00:05:11.343 00:05:11.343 real 0m1.226s 00:05:11.343 user 0m1.129s 00:05:11.343 sys 0m0.092s 00:05:11.343 10:43:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.343 10:43:00 -- common/autotest_common.sh@10 -- # set +x 00:05:11.343 ************************************ 00:05:11.343 END TEST event_reactor_perf 00:05:11.343 ************************************ 00:05:11.343 10:43:00 -- event/event.sh@49 -- # uname -s 00:05:11.343 10:43:00 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:11.343 10:43:00 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:11.343 10:43:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.343 10:43:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.343 10:43:00 -- common/autotest_common.sh@10 -- # set +x 00:05:11.343 ************************************ 00:05:11.344 START TEST event_scheduler 00:05:11.344 ************************************ 00:05:11.344 10:43:00 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:11.603 * Looking for test storage... 00:05:11.603 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:05:11.603 10:43:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:11.603 10:43:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:11.603 10:43:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:11.603 10:43:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:11.603 10:43:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:11.603 10:43:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:11.603 10:43:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:11.603 10:43:00 -- scripts/common.sh@335 -- # IFS=.-: 00:05:11.603 10:43:00 -- scripts/common.sh@335 -- # read -ra ver1 00:05:11.603 10:43:00 -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.603 10:43:00 -- scripts/common.sh@336 -- # read -ra ver2 00:05:11.603 10:43:00 -- scripts/common.sh@337 -- # local 'op=<' 00:05:11.603 10:43:00 -- scripts/common.sh@339 -- # ver1_l=2 00:05:11.603 10:43:00 -- scripts/common.sh@340 -- # ver2_l=1 00:05:11.603 10:43:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:11.603 10:43:00 -- scripts/common.sh@343 -- # case "$op" in 00:05:11.603 10:43:00 -- scripts/common.sh@344 -- # : 1 00:05:11.603 10:43:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:11.603 10:43:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:11.603 10:43:00 -- scripts/common.sh@364 -- # decimal 1 00:05:11.603 10:43:00 -- scripts/common.sh@352 -- # local d=1 00:05:11.603 10:43:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.603 10:43:00 -- scripts/common.sh@354 -- # echo 1 00:05:11.603 10:43:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:11.603 10:43:00 -- scripts/common.sh@365 -- # decimal 2 00:05:11.603 10:43:00 -- scripts/common.sh@352 -- # local d=2 00:05:11.603 10:43:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.603 10:43:00 -- scripts/common.sh@354 -- # echo 2 00:05:11.603 10:43:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:11.603 10:43:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:11.603 10:43:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:11.603 10:43:00 -- scripts/common.sh@367 -- # return 0 00:05:11.603 10:43:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.604 10:43:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:11.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.604 --rc genhtml_branch_coverage=1 00:05:11.604 --rc genhtml_function_coverage=1 00:05:11.604 --rc genhtml_legend=1 00:05:11.604 --rc geninfo_all_blocks=1 00:05:11.604 --rc geninfo_unexecuted_blocks=1 00:05:11.604 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:11.604 ' 00:05:11.604 10:43:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:11.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.604 --rc genhtml_branch_coverage=1 00:05:11.604 --rc genhtml_function_coverage=1 00:05:11.604 --rc genhtml_legend=1 00:05:11.604 --rc geninfo_all_blocks=1 00:05:11.604 --rc geninfo_unexecuted_blocks=1 00:05:11.604 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:11.604 ' 00:05:11.604 10:43:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:11.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.604 --rc genhtml_branch_coverage=1 00:05:11.604 --rc genhtml_function_coverage=1 00:05:11.604 --rc genhtml_legend=1 00:05:11.604 --rc geninfo_all_blocks=1 00:05:11.604 --rc geninfo_unexecuted_blocks=1 00:05:11.604 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:11.604 ' 00:05:11.604 10:43:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:11.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.604 --rc genhtml_branch_coverage=1 00:05:11.604 --rc genhtml_function_coverage=1 00:05:11.604 --rc genhtml_legend=1 00:05:11.604 --rc geninfo_all_blocks=1 00:05:11.604 --rc geninfo_unexecuted_blocks=1 00:05:11.604 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:11.604 ' 00:05:11.604 10:43:00 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:11.604 10:43:00 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1285946 00:05:11.604 10:43:00 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:11.604 10:43:00 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.604 10:43:00 -- scheduler/scheduler.sh@37 -- # waitforlisten 1285946 00:05:11.604 10:43:00 -- common/autotest_common.sh@829 -- # '[' -z 1285946 ']' 00:05:11.604 10:43:00 -- common/autotest_common.sh@833 
-- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.604 10:43:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.604 10:43:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.604 10:43:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.604 10:43:00 -- common/autotest_common.sh@10 -- # set +x 00:05:11.604 [2024-12-15 10:43:00.508557] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:11.604 [2024-12-15 10:43:00.508644] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285946 ] 00:05:11.604 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.604 [2024-12-15 10:43:00.574251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:11.863 [2024-12-15 10:43:00.650923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.863 [2024-12-15 10:43:00.651010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.863 [2024-12-15 10:43:00.651091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:11.863 [2024-12-15 10:43:00.651093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.431 10:43:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.431 10:43:01 -- common/autotest_common.sh@862 -- # return 0 00:05:12.431 10:43:01 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:12.431 10:43:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.431 10:43:01 -- common/autotest_common.sh@10 -- # set +x 00:05:12.431 POWER: Env isn't set yet! 00:05:12.431 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:12.431 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:12.432 POWER: Cannot set governor of lcore 0 to userspace 00:05:12.432 POWER: Attempting to initialise PSTAT power management... 
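The governor probing that starts here is standard DPDK power-library behavior: it first tries the ACPI cpufreq userspace governor (which fails above with "Cannot set governor of lcore 0 to userspace"), then falls back to intel_pstate and sets each lcore to performance; the per-lcore confirmations follow below. An illustrative way to inspect the state being probed (standard sysfs paths, not taken from the trace):

    for d in /sys/devices/system/cpu/cpu[0-3]/cpufreq; do
        echo "$d: $(cat "$d/scaling_driver") / $(cat "$d/scaling_governor")"
    done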
00:05:12.432 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:12.432 POWER: Initialized successfully for lcore 0 power management 00:05:12.432 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:12.432 POWER: Initialized successfully for lcore 1 power management 00:05:12.432 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:12.432 POWER: Initialized successfully for lcore 2 power management 00:05:12.432 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:12.432 POWER: Initialized successfully for lcore 3 power management 00:05:12.432 [2024-12-15 10:43:01.398658] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:12.432 [2024-12-15 10:43:01.398672] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:12.432 [2024-12-15 10:43:01.398681] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:12.432 10:43:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.432 10:43:01 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:12.432 10:43:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.432 10:43:01 -- common/autotest_common.sh@10 -- # set +x 00:05:12.691 [2024-12-15 10:43:01.466621] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:12.691 10:43:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.691 10:43:01 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:12.691 10:43:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:12.691 10:43:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.691 10:43:01 -- common/autotest_common.sh@10 -- # set +x 00:05:12.691 ************************************ 00:05:12.691 START TEST scheduler_create_thread 00:05:12.691 ************************************ 00:05:12.691 10:43:01 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:12.691 10:43:01 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:12.691 10:43:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.691 10:43:01 -- common/autotest_common.sh@10 -- # set +x 00:05:12.691 2 00:05:12.691 10:43:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.691 10:43:01 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:12.691 10:43:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.691 10:43:01 -- common/autotest_common.sh@10 -- # set +x 00:05:12.691 3 00:05:12.691 10:43:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.691 10:43:01 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:12.691 10:43:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.691 10:43:01 -- common/autotest_common.sh@10 -- # set +x 00:05:12.691 4 00:05:12.691 10:43:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.691 10:43:01 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:12.691 10:43:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.691 10:43:01 -- common/autotest_common.sh@10 -- # set +x 00:05:12.691 5 00:05:12.691 
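scheduler_create_thread drives the scheduler app purely over RPC through the scheduler_plugin: pinned active and idle threads are created with a cpumask (-m) and an active percentage (-a); the remaining pinned threads, an unpinned half_active thread switched to 50% with scheduler_thread_set_active, and a short-lived thread removed with scheduler_thread_delete all follow below. A condensed replay of those calls (the $rpc shorthand and shortened path are editorial; rpc_cmd in the harness wraps scripts/rpc.py):

    rpc="./scripts/rpc.py --plugin scheduler_plugin"              # plugin lives in test/event/scheduler
    $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # repeated for 0x2, 0x4, 0x8
    $rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # likewise, one per lcore
    tid=$($rpc scheduler_thread_create -n half_active -a 0)       # RPC returns the new thread id
    $rpc scheduler_thread_set_active "$tid" 50
    tid=$($rpc scheduler_thread_create -n deleted -a 100)
    $rpc scheduler_thread_delete "$tid"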
10:43:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.691 10:43:01 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:12.691 10:43:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.691 10:43:01 -- common/autotest_common.sh@10 -- # set +x 00:05:12.691 6 00:05:12.691 10:43:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.691 10:43:01 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:12.691 10:43:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.691 10:43:01 -- common/autotest_common.sh@10 -- # set +x 00:05:12.691 7 00:05:12.691 10:43:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.691 10:43:01 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:12.691 10:43:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.691 10:43:01 -- common/autotest_common.sh@10 -- # set +x 00:05:12.691 8 00:05:12.691 10:43:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.691 10:43:01 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:12.691 10:43:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.691 10:43:01 -- common/autotest_common.sh@10 -- # set +x 00:05:12.691 9 00:05:12.691 10:43:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.691 10:43:01 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:12.691 10:43:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.691 10:43:01 -- common/autotest_common.sh@10 -- # set +x 00:05:12.691 10 00:05:12.691 10:43:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.691 10:43:01 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:12.691 10:43:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.691 10:43:01 -- common/autotest_common.sh@10 -- # set +x 00:05:12.691 10:43:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.691 10:43:01 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:12.691 10:43:01 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:12.691 10:43:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.691 10:43:01 -- common/autotest_common.sh@10 -- # set +x 00:05:13.628 10:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.628 10:43:02 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:13.628 10:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.628 10:43:02 -- common/autotest_common.sh@10 -- # set +x 00:05:15.005 10:43:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.005 10:43:03 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:15.005 10:43:03 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:15.005 10:43:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.005 10:43:03 -- common/autotest_common.sh@10 -- # set +x 00:05:15.942 10:43:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.942 00:05:15.942 real 0m3.382s 00:05:15.942 user 0m0.021s 00:05:15.942 sys 0m0.010s 00:05:15.942 10:43:04 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:15.942 10:43:04 -- common/autotest_common.sh@10 -- # set +x 00:05:15.942 ************************************ 00:05:15.942 END TEST scheduler_create_thread 00:05:15.942 ************************************ 00:05:15.942 10:43:04 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:15.942 10:43:04 -- scheduler/scheduler.sh@46 -- # killprocess 1285946 00:05:15.942 10:43:04 -- common/autotest_common.sh@936 -- # '[' -z 1285946 ']' 00:05:15.942 10:43:04 -- common/autotest_common.sh@940 -- # kill -0 1285946 00:05:15.942 10:43:04 -- common/autotest_common.sh@941 -- # uname 00:05:15.942 10:43:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:15.942 10:43:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1285946 00:05:16.201 10:43:04 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:16.201 10:43:04 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:16.201 10:43:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1285946' 00:05:16.201 killing process with pid 1285946 00:05:16.201 10:43:04 -- common/autotest_common.sh@955 -- # kill 1285946 00:05:16.201 10:43:04 -- common/autotest_common.sh@960 -- # wait 1285946 00:05:16.461 [2024-12-15 10:43:05.238402] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:16.461 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:16.461 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:16.461 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:16.461 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:16.461 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:16.461 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:16.461 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:16.461 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:16.461 00:05:16.461 real 0m5.147s 00:05:16.461 user 0m10.613s 00:05:16.461 sys 0m0.401s 00:05:16.461 10:43:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.461 10:43:05 -- common/autotest_common.sh@10 -- # set +x 00:05:16.461 ************************************ 00:05:16.461 END TEST event_scheduler 00:05:16.461 ************************************ 00:05:16.720 10:43:05 -- event/event.sh@51 -- # modprobe -n nbd 00:05:16.720 10:43:05 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:16.720 10:43:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.720 10:43:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.720 10:43:05 -- common/autotest_common.sh@10 -- # set +x 00:05:16.720 ************************************ 00:05:16.720 START TEST app_repeat 00:05:16.720 ************************************ 00:05:16.720 10:43:05 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:16.720 10:43:05 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.720 10:43:05 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.720 10:43:05 -- event/event.sh@13 -- # local nbd_list 00:05:16.720 10:43:05 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.720 10:43:05 -- 
event/event.sh@14 -- # local bdev_list 00:05:16.720 10:43:05 -- event/event.sh@15 -- # local repeat_times=4 00:05:16.720 10:43:05 -- event/event.sh@17 -- # modprobe nbd 00:05:16.720 10:43:05 -- event/event.sh@19 -- # repeat_pid=1286812 00:05:16.720 10:43:05 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.720 10:43:05 -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:16.720 10:43:05 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1286812' 00:05:16.720 Process app_repeat pid: 1286812 00:05:16.720 10:43:05 -- event/event.sh@23 -- # for i in {0..2} 00:05:16.720 10:43:05 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:16.720 spdk_app_start Round 0 00:05:16.720 10:43:05 -- event/event.sh@25 -- # waitforlisten 1286812 /var/tmp/spdk-nbd.sock 00:05:16.720 10:43:05 -- common/autotest_common.sh@829 -- # '[' -z 1286812 ']' 00:05:16.720 10:43:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:16.720 10:43:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.720 10:43:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:16.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:16.720 10:43:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.720 10:43:05 -- common/autotest_common.sh@10 -- # set +x 00:05:16.720 [2024-12-15 10:43:05.539245] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:16.720 [2024-12-15 10:43:05.539335] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1286812 ] 00:05:16.720 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.720 [2024-12-15 10:43:05.609015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.720 [2024-12-15 10:43:05.677102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.720 [2024-12-15 10:43:05.677104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.658 10:43:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.658 10:43:06 -- common/autotest_common.sh@862 -- # return 0 00:05:17.658 10:43:06 -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.658 Malloc0 00:05:17.658 10:43:06 -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.917 Malloc1 00:05:17.917 10:43:06 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.917 10:43:06 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.917 10:43:06 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.917 10:43:06 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.917 10:43:06 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.917 10:43:06 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.917 10:43:06 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.917 
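Each app_repeat round rebuilds the same fixture over the app's private socket: two malloc bdevs (64 MB with a 4 KiB block size, if the usual rpc.py sizing applies) exported as /dev/nbd0 and /dev/nbd1; the per-device nbd_start_disk calls issued by nbd_start_disks are traced below. The setup therefore reduces to these RPCs (paths shortened from the logged commands):

    sock=/var/tmp/spdk-nbd.sock
    rpc=./scripts/rpc.py                         # full jenkins workspace path in the trace
    $rpc -s $sock bdev_malloc_create 64 4096     # -> Malloc0
    $rpc -s $sock bdev_malloc_create 64 4096     # -> Malloc1
    $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
    $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1
    $rpc -s $sock nbd_get_disks                  # JSON device map checked by the verifier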
10:43:06 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.917 10:43:06 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.917 10:43:06 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.917 10:43:06 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.917 10:43:06 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.917 10:43:06 -- bdev/nbd_common.sh@12 -- # local i 00:05:17.917 10:43:06 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.917 10:43:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.917 10:43:06 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.917 /dev/nbd0 00:05:18.176 10:43:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:18.176 10:43:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:18.176 10:43:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:18.176 10:43:06 -- common/autotest_common.sh@867 -- # local i 00:05:18.176 10:43:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:18.176 10:43:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:18.176 10:43:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:18.176 10:43:06 -- common/autotest_common.sh@871 -- # break 00:05:18.176 10:43:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:18.176 10:43:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:18.176 10:43:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.176 1+0 records in 00:05:18.176 1+0 records out 00:05:18.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241904 s, 16.9 MB/s 00:05:18.176 10:43:06 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:18.176 10:43:06 -- common/autotest_common.sh@884 -- # size=4096 00:05:18.176 10:43:06 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:18.176 10:43:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:18.176 10:43:06 -- common/autotest_common.sh@887 -- # return 0 00:05:18.176 10:43:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.176 10:43:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.176 10:43:06 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:18.176 /dev/nbd1 00:05:18.176 10:43:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:18.176 10:43:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:18.176 10:43:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:18.176 10:43:07 -- common/autotest_common.sh@867 -- # local i 00:05:18.176 10:43:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:18.176 10:43:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:18.176 10:43:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:18.176 10:43:07 -- common/autotest_common.sh@871 -- # break 00:05:18.176 10:43:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:18.176 10:43:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:18.176 10:43:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:05:18.176 1+0 records in 00:05:18.176 1+0 records out 00:05:18.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248054 s, 16.5 MB/s 00:05:18.176 10:43:07 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:18.176 10:43:07 -- common/autotest_common.sh@884 -- # size=4096 00:05:18.176 10:43:07 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:18.176 10:43:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:18.176 10:43:07 -- common/autotest_common.sh@887 -- # return 0 00:05:18.176 10:43:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.176 10:43:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.435 10:43:07 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.435 10:43:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.435 10:43:07 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.435 10:43:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:18.435 { 00:05:18.435 "nbd_device": "/dev/nbd0", 00:05:18.435 "bdev_name": "Malloc0" 00:05:18.435 }, 00:05:18.435 { 00:05:18.435 "nbd_device": "/dev/nbd1", 00:05:18.435 "bdev_name": "Malloc1" 00:05:18.435 } 00:05:18.435 ]' 00:05:18.435 10:43:07 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:18.435 { 00:05:18.435 "nbd_device": "/dev/nbd0", 00:05:18.435 "bdev_name": "Malloc0" 00:05:18.435 }, 00:05:18.435 { 00:05:18.435 "nbd_device": "/dev/nbd1", 00:05:18.435 "bdev_name": "Malloc1" 00:05:18.435 } 00:05:18.435 ]' 00:05:18.435 10:43:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.435 10:43:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.435 /dev/nbd1' 00:05:18.435 10:43:07 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.435 10:43:07 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.435 /dev/nbd1' 00:05:18.435 10:43:07 -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.435 10:43:07 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.435 10:43:07 -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.435 10:43:07 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.435 10:43:07 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.435 10:43:07 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.435 10:43:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.435 10:43:07 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.435 10:43:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.435 10:43:07 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.435 10:43:07 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.435 256+0 records in 00:05:18.435 256+0 records out 00:05:18.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108167 s, 96.9 MB/s 00:05:18.435 10:43:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.435 10:43:07 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.435 256+0 records in 00:05:18.435 256+0 records out 00:05:18.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195696 s, 53.6 MB/s 00:05:18.435 10:43:07 -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.435 10:43:07 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.694 256+0 records in 00:05:18.694 256+0 records out 00:05:18.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208611 s, 50.3 MB/s 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@51 -- # local i 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@41 -- # break 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.694 10:43:07 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.953 10:43:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.953 10:43:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.953 10:43:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.953 10:43:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.953 10:43:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.953 10:43:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.953 10:43:07 -- bdev/nbd_common.sh@41 -- # break 00:05:18.953 10:43:07 -- 
bdev/nbd_common.sh@45 -- # return 0 00:05:18.953 10:43:07 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.953 10:43:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.953 10:43:07 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.212 10:43:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:19.212 10:43:08 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:19.212 10:43:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.212 10:43:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:19.212 10:43:08 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:19.212 10:43:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.212 10:43:08 -- bdev/nbd_common.sh@65 -- # true 00:05:19.212 10:43:08 -- bdev/nbd_common.sh@65 -- # count=0 00:05:19.212 10:43:08 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:19.212 10:43:08 -- bdev/nbd_common.sh@104 -- # count=0 00:05:19.212 10:43:08 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:19.212 10:43:08 -- bdev/nbd_common.sh@109 -- # return 0 00:05:19.212 10:43:08 -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.471 10:43:08 -- event/event.sh@35 -- # sleep 3 00:05:19.729 [2024-12-15 10:43:08.505112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.729 [2024-12-15 10:43:08.572402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.729 [2024-12-15 10:43:08.572403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.729 [2024-12-15 10:43:08.613296] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:19.729 [2024-12-15 10:43:08.613339] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:23.016 10:43:11 -- event/event.sh@23 -- # for i in {0..2} 00:05:23.016 10:43:11 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:23.016 spdk_app_start Round 1 00:05:23.016 10:43:11 -- event/event.sh@25 -- # waitforlisten 1286812 /var/tmp/spdk-nbd.sock 00:05:23.016 10:43:11 -- common/autotest_common.sh@829 -- # '[' -z 1286812 ']' 00:05:23.016 10:43:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:23.016 10:43:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.016 10:43:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:23.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
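Round 1 now repeats the fixture, and the data pass that both rounds run (traced in full above and again below) is plain dd plus cmp: 1 MiB of urandom is written through each nbd device with O_DIRECT, then the device contents are byte-compared against the temp file. A minimal standalone version of that pass (the temp path is assumed; the harness keeps it under spdk/test/event):

    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct # write it out
        cmp -b -n 1M "$tmp" "$nbd"                            # read back and byte-compare
    done
    rm "$tmp"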
00:05:23.016 10:43:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.016 10:43:11 -- common/autotest_common.sh@10 -- # set +x 00:05:23.016 10:43:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.016 10:43:11 -- common/autotest_common.sh@862 -- # return 0 00:05:23.016 10:43:11 -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.016 Malloc0 00:05:23.016 10:43:11 -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.016 Malloc1 00:05:23.016 10:43:11 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.016 10:43:11 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.016 10:43:11 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.016 10:43:11 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.016 10:43:11 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.016 10:43:11 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.016 10:43:11 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.016 10:43:11 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.016 10:43:11 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.016 10:43:11 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.016 10:43:11 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.016 10:43:11 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.017 10:43:11 -- bdev/nbd_common.sh@12 -- # local i 00:05:23.017 10:43:11 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.017 10:43:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.017 10:43:11 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:23.276 /dev/nbd0 00:05:23.276 10:43:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:23.276 10:43:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:23.276 10:43:12 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:23.276 10:43:12 -- common/autotest_common.sh@867 -- # local i 00:05:23.276 10:43:12 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:23.276 10:43:12 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:23.276 10:43:12 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:23.276 10:43:12 -- common/autotest_common.sh@871 -- # break 00:05:23.276 10:43:12 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:23.276 10:43:12 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:23.276 10:43:12 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.276 1+0 records in 00:05:23.276 1+0 records out 00:05:23.276 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218963 s, 18.7 MB/s 00:05:23.276 10:43:12 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:23.276 10:43:12 -- common/autotest_common.sh@884 -- # size=4096 00:05:23.276 10:43:12 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:23.276 10:43:12 -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:23.276 10:43:12 -- common/autotest_common.sh@887 -- # return 0 00:05:23.276 10:43:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.276 10:43:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.276 10:43:12 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:23.276 /dev/nbd1 00:05:23.276 10:43:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:23.276 10:43:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:23.276 10:43:12 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:23.276 10:43:12 -- common/autotest_common.sh@867 -- # local i 00:05:23.276 10:43:12 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:23.276 10:43:12 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:23.276 10:43:12 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:23.276 10:43:12 -- common/autotest_common.sh@871 -- # break 00:05:23.276 10:43:12 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:23.276 10:43:12 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:23.276 10:43:12 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.276 1+0 records in 00:05:23.276 1+0 records out 00:05:23.276 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236951 s, 17.3 MB/s 00:05:23.276 10:43:12 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:23.535 10:43:12 -- common/autotest_common.sh@884 -- # size=4096 00:05:23.536 10:43:12 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:23.536 10:43:12 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:23.536 10:43:12 -- common/autotest_common.sh@887 -- # return 0 00:05:23.536 10:43:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.536 10:43:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.536 10:43:12 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.536 10:43:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.536 10:43:12 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.536 10:43:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:23.536 { 00:05:23.536 "nbd_device": "/dev/nbd0", 00:05:23.536 "bdev_name": "Malloc0" 00:05:23.536 }, 00:05:23.536 { 00:05:23.536 "nbd_device": "/dev/nbd1", 00:05:23.536 "bdev_name": "Malloc1" 00:05:23.536 } 00:05:23.536 ]' 00:05:23.536 10:43:12 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:23.536 { 00:05:23.536 "nbd_device": "/dev/nbd0", 00:05:23.536 "bdev_name": "Malloc0" 00:05:23.536 }, 00:05:23.536 { 00:05:23.536 "nbd_device": "/dev/nbd1", 00:05:23.536 "bdev_name": "Malloc1" 00:05:23.536 } 00:05:23.536 ]' 00:05:23.536 10:43:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.536 10:43:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:23.536 /dev/nbd1' 00:05:23.536 10:43:12 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:23.536 /dev/nbd1' 00:05:23.536 10:43:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.536 10:43:12 -- bdev/nbd_common.sh@65 -- # count=2 00:05:23.536 10:43:12 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:23.536 10:43:12 -- 
bdev/nbd_common.sh@95 -- # count=2 00:05:23.536 10:43:12 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:23.536 10:43:12 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:23.536 10:43:12 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.536 10:43:12 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.536 10:43:12 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:23.536 10:43:12 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.536 10:43:12 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:23.536 10:43:12 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:23.536 256+0 records in 00:05:23.536 256+0 records out 00:05:23.536 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00351216 s, 299 MB/s 00:05:23.536 10:43:12 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.536 10:43:12 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:23.795 256+0 records in 00:05:23.795 256+0 records out 00:05:23.795 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199382 s, 52.6 MB/s 00:05:23.795 10:43:12 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.795 10:43:12 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:23.795 256+0 records in 00:05:23.795 256+0 records out 00:05:23.795 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209317 s, 50.1 MB/s 00:05:23.795 10:43:12 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:23.795 10:43:12 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.795 10:43:12 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.795 10:43:12 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:23.795 10:43:12 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.795 10:43:12 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:23.795 10:43:12 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:23.795 10:43:12 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.795 10:43:12 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:23.795 10:43:12 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.795 10:43:12 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:23.795 10:43:12 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.795 10:43:12 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:23.795 10:43:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.795 10:43:12 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.795 10:43:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:23.795 10:43:12 -- bdev/nbd_common.sh@51 -- # local i 00:05:23.795 10:43:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.795 10:43:12 -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:24.055 10:43:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:24.055 10:43:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:24.055 10:43:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:24.055 10:43:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.055 10:43:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.055 10:43:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:24.055 10:43:12 -- bdev/nbd_common.sh@41 -- # break 00:05:24.055 10:43:12 -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.055 10:43:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.055 10:43:12 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:24.055 10:43:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:24.055 10:43:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:24.055 10:43:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:24.055 10:43:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.055 10:43:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.055 10:43:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:24.055 10:43:13 -- bdev/nbd_common.sh@41 -- # break 00:05:24.055 10:43:13 -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.055 10:43:13 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.055 10:43:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.055 10:43:13 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.314 10:43:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:24.314 10:43:13 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:24.314 10:43:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.314 10:43:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:24.314 10:43:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.314 10:43:13 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:24.314 10:43:13 -- bdev/nbd_common.sh@65 -- # true 00:05:24.314 10:43:13 -- bdev/nbd_common.sh@65 -- # count=0 00:05:24.314 10:43:13 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:24.314 10:43:13 -- bdev/nbd_common.sh@104 -- # count=0 00:05:24.314 10:43:13 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:24.314 10:43:13 -- bdev/nbd_common.sh@109 -- # return 0 00:05:24.314 10:43:13 -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:24.573 10:43:13 -- event/event.sh@35 -- # sleep 3 00:05:24.833 [2024-12-15 10:43:13.613395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.833 [2024-12-15 10:43:13.678818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.833 [2024-12-15 10:43:13.678820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.833 [2024-12-15 10:43:13.719930] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:24.833 [2024-12-15 10:43:13.719972] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
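[editor's note] The round traced above follows a fixed pattern: export both Malloc bdevs as /dev/nbd0 and /dev/nbd1, seed 1 MiB of /dev/urandom into a temp file, push it to each device with O_DIRECT, read it back with cmp, then stop the disks and confirm nbd_get_disks returns an empty list. Below is a minimal bash sketch of the write/verify half, assuming the devices are already exported; the helper name and scratch path are illustrative, not the autotest original.

#!/usr/bin/env bash
# Minimal sketch of the nbd write/verify round, assuming the nbd
# devices are already exported. Block sizes and the cmp invocation
# mirror the trace; this is an illustration, not the SPDK helper.
nbd_dd_data_verify_sketch() {
    local operation=$1; shift
    local nbd_list=("$@")
    local tmp_file=/tmp/nbdrandtest    # assumed scratch path
    if [ "$operation" = write ]; then
        # Seed a 1 MiB random pattern, then write it to every device
        # with O_DIRECT so the data really hits the block layer.
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for dev in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
        done
    else
        # Reading the first 1 MiB back must match byte for byte; a
        # cmp mismatch exits non-zero and fails the test.
        for dev in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$dev" || return 1
        done
        rm "$tmp_file"
    fi
}
# Usage: nbd_dd_data_verify_sketch write /dev/nbd0 /dev/nbd1
#        nbd_dd_data_verify_sketch verify /dev/nbd0 /dev/nbd1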
00:05:28.121 10:43:16 -- event/event.sh@23 -- # for i in {0..2} 00:05:28.121 10:43:16 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:28.121 spdk_app_start Round 2 00:05:28.121 10:43:16 -- event/event.sh@25 -- # waitforlisten 1286812 /var/tmp/spdk-nbd.sock 00:05:28.121 10:43:16 -- common/autotest_common.sh@829 -- # '[' -z 1286812 ']' 00:05:28.121 10:43:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.121 10:43:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.121 10:43:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:28.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.121 10:43:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.121 10:43:16 -- common/autotest_common.sh@10 -- # set +x 00:05:28.121 10:43:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.121 10:43:16 -- common/autotest_common.sh@862 -- # return 0 00:05:28.121 10:43:16 -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.121 Malloc0 00:05:28.121 10:43:16 -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.121 Malloc1 00:05:28.121 10:43:16 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.121 10:43:16 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.121 10:43:16 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.121 10:43:16 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:28.121 10:43:16 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.121 10:43:16 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:28.121 10:43:16 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.121 10:43:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.121 10:43:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.121 10:43:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:28.121 10:43:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.121 10:43:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:28.121 10:43:16 -- bdev/nbd_common.sh@12 -- # local i 00:05:28.121 10:43:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:28.121 10:43:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.121 10:43:16 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:28.380 /dev/nbd0 00:05:28.380 10:43:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:28.380 10:43:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:28.381 10:43:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:28.381 10:43:17 -- common/autotest_common.sh@867 -- # local i 00:05:28.381 10:43:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:28.381 10:43:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:28.381 10:43:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:28.381 10:43:17 -- common/autotest_common.sh@871 -- # break 00:05:28.381 10:43:17 -- common/autotest_common.sh@882 -- # (( 
i = 1 )) 00:05:28.381 10:43:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:28.381 10:43:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.381 1+0 records in 00:05:28.381 1+0 records out 00:05:28.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240254 s, 17.0 MB/s 00:05:28.381 10:43:17 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:28.381 10:43:17 -- common/autotest_common.sh@884 -- # size=4096 00:05:28.381 10:43:17 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:28.381 10:43:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:28.381 10:43:17 -- common/autotest_common.sh@887 -- # return 0 00:05:28.381 10:43:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.381 10:43:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.381 10:43:17 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:28.381 /dev/nbd1 00:05:28.381 10:43:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:28.381 10:43:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:28.381 10:43:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:28.381 10:43:17 -- common/autotest_common.sh@867 -- # local i 00:05:28.381 10:43:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:28.381 10:43:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:28.381 10:43:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:28.381 10:43:17 -- common/autotest_common.sh@871 -- # break 00:05:28.381 10:43:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:28.381 10:43:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:28.381 10:43:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.640 1+0 records in 00:05:28.640 1+0 records out 00:05:28.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248054 s, 16.5 MB/s 00:05:28.640 10:43:17 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:28.640 10:43:17 -- common/autotest_common.sh@884 -- # size=4096 00:05:28.640 10:43:17 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:28.640 10:43:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:28.640 10:43:17 -- common/autotest_common.sh@887 -- # return 0 00:05:28.640 10:43:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.640 10:43:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.640 10:43:17 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.640 10:43:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.640 10:43:17 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.640 10:43:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:28.640 { 00:05:28.640 "nbd_device": "/dev/nbd0", 00:05:28.640 "bdev_name": "Malloc0" 00:05:28.640 }, 00:05:28.640 { 00:05:28.640 "nbd_device": "/dev/nbd1", 00:05:28.640 "bdev_name": "Malloc1" 00:05:28.640 } 00:05:28.640 ]' 00:05:28.640 10:43:17 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:05:28.640 { 00:05:28.640 "nbd_device": "/dev/nbd0", 00:05:28.640 "bdev_name": "Malloc0" 00:05:28.640 }, 00:05:28.640 { 00:05:28.640 "nbd_device": "/dev/nbd1", 00:05:28.640 "bdev_name": "Malloc1" 00:05:28.640 } 00:05:28.640 ]' 00:05:28.640 10:43:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.640 10:43:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:28.640 /dev/nbd1' 00:05:28.640 10:43:17 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:28.640 /dev/nbd1' 00:05:28.640 10:43:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.640 10:43:17 -- bdev/nbd_common.sh@65 -- # count=2 00:05:28.640 10:43:17 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:28.640 10:43:17 -- bdev/nbd_common.sh@95 -- # count=2 00:05:28.640 10:43:17 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:28.640 10:43:17 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:28.640 10:43:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.640 10:43:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.640 10:43:17 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:28.640 10:43:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.640 10:43:17 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:28.640 10:43:17 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:28.899 256+0 records in 00:05:28.899 256+0 records out 00:05:28.899 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116747 s, 89.8 MB/s 00:05:28.899 10:43:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.899 10:43:17 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:28.899 256+0 records in 00:05:28.899 256+0 records out 00:05:28.899 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195178 s, 53.7 MB/s 00:05:28.899 10:43:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.899 10:43:17 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:28.899 256+0 records in 00:05:28.899 256+0 records out 00:05:28.899 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211271 s, 49.6 MB/s 00:05:28.899 10:43:17 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:28.900 10:43:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.900 10:43:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.900 10:43:17 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:28.900 10:43:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.900 10:43:17 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:28.900 10:43:17 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:28.900 10:43:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.900 10:43:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:28.900 10:43:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.900 10:43:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:28.900 10:43:17 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.900 10:43:17 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:28.900 10:43:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.900 10:43:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.900 10:43:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:28.900 10:43:17 -- bdev/nbd_common.sh@51 -- # local i 00:05:28.900 10:43:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.900 10:43:17 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:29.159 10:43:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:29.159 10:43:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:29.159 10:43:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:29.159 10:43:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.159 10:43:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.159 10:43:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:29.159 10:43:17 -- bdev/nbd_common.sh@41 -- # break 00:05:29.159 10:43:17 -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.159 10:43:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.159 10:43:17 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:29.159 10:43:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:29.159 10:43:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:29.159 10:43:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:29.159 10:43:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.159 10:43:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.159 10:43:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:29.159 10:43:18 -- bdev/nbd_common.sh@41 -- # break 00:05:29.159 10:43:18 -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.159 10:43:18 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.159 10:43:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.159 10:43:18 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.418 10:43:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:29.418 10:43:18 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:29.418 10:43:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.418 10:43:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:29.418 10:43:18 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:29.418 10:43:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.418 10:43:18 -- bdev/nbd_common.sh@65 -- # true 00:05:29.418 10:43:18 -- bdev/nbd_common.sh@65 -- # count=0 00:05:29.418 10:43:18 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:29.418 10:43:18 -- bdev/nbd_common.sh@104 -- # count=0 00:05:29.418 10:43:18 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:29.418 10:43:18 -- bdev/nbd_common.sh@109 -- # return 0 00:05:29.418 10:43:18 -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:29.677 10:43:18 -- 
event/event.sh@35 -- # sleep 3 00:05:29.936 [2024-12-15 10:43:18.734140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.936 [2024-12-15 10:43:18.797424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.936 [2024-12-15 10:43:18.797426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.936 [2024-12-15 10:43:18.837435] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:29.936 [2024-12-15 10:43:18.837478] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:33.226 10:43:21 -- event/event.sh@38 -- # waitforlisten 1286812 /var/tmp/spdk-nbd.sock 00:05:33.226 10:43:21 -- common/autotest_common.sh@829 -- # '[' -z 1286812 ']' 00:05:33.226 10:43:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:33.226 10:43:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.226 10:43:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:33.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:33.226 10:43:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.226 10:43:21 -- common/autotest_common.sh@10 -- # set +x 00:05:33.226 10:43:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.226 10:43:21 -- common/autotest_common.sh@862 -- # return 0 00:05:33.226 10:43:21 -- event/event.sh@39 -- # killprocess 1286812 00:05:33.226 10:43:21 -- common/autotest_common.sh@936 -- # '[' -z 1286812 ']' 00:05:33.226 10:43:21 -- common/autotest_common.sh@940 -- # kill -0 1286812 00:05:33.226 10:43:21 -- common/autotest_common.sh@941 -- # uname 00:05:33.226 10:43:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:33.226 10:43:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1286812 00:05:33.226 10:43:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:33.226 10:43:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:33.226 10:43:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1286812' 00:05:33.226 killing process with pid 1286812 00:05:33.226 10:43:21 -- common/autotest_common.sh@955 -- # kill 1286812 00:05:33.226 10:43:21 -- common/autotest_common.sh@960 -- # wait 1286812 00:05:33.226 spdk_app_start is called in Round 0. 00:05:33.226 Shutdown signal received, stop current app iteration 00:05:33.226 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:33.226 spdk_app_start is called in Round 1. 00:05:33.226 Shutdown signal received, stop current app iteration 00:05:33.226 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:33.226 spdk_app_start is called in Round 2. 00:05:33.226 Shutdown signal received, stop current app iteration 00:05:33.226 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:33.226 spdk_app_start is called in Round 3. 
00:05:33.226 Shutdown signal received, stop current app iteration 00:05:33.226 10:43:21 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:33.226 10:43:21 -- event/event.sh@42 -- # return 0 00:05:33.226 00:05:33.226 real 0m16.447s 00:05:33.226 user 0m35.053s 00:05:33.226 sys 0m3.084s 00:05:33.226 10:43:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.226 10:43:21 -- common/autotest_common.sh@10 -- # set +x 00:05:33.226 ************************************ 00:05:33.226 END TEST app_repeat 00:05:33.226 ************************************ 00:05:33.226 10:43:22 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:33.226 10:43:22 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:33.226 10:43:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.226 10:43:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.226 10:43:22 -- common/autotest_common.sh@10 -- # set +x 00:05:33.226 ************************************ 00:05:33.226 START TEST cpu_locks 00:05:33.226 ************************************ 00:05:33.226 10:43:22 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:33.226 * Looking for test storage... 00:05:33.226 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:33.226 10:43:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:33.226 10:43:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:33.226 10:43:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:33.226 10:43:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:33.226 10:43:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:33.226 10:43:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:33.226 10:43:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:33.226 10:43:22 -- scripts/common.sh@335 -- # IFS=.-: 00:05:33.226 10:43:22 -- scripts/common.sh@335 -- # read -ra ver1 00:05:33.226 10:43:22 -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.226 10:43:22 -- scripts/common.sh@336 -- # read -ra ver2 00:05:33.226 10:43:22 -- scripts/common.sh@337 -- # local 'op=<' 00:05:33.226 10:43:22 -- scripts/common.sh@339 -- # ver1_l=2 00:05:33.226 10:43:22 -- scripts/common.sh@340 -- # ver2_l=1 00:05:33.226 10:43:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:33.226 10:43:22 -- scripts/common.sh@343 -- # case "$op" in 00:05:33.226 10:43:22 -- scripts/common.sh@344 -- # : 1 00:05:33.226 10:43:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:33.226 10:43:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.226 10:43:22 -- scripts/common.sh@364 -- # decimal 1 00:05:33.226 10:43:22 -- scripts/common.sh@352 -- # local d=1 00:05:33.226 10:43:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.226 10:43:22 -- scripts/common.sh@354 -- # echo 1 00:05:33.226 10:43:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:33.226 10:43:22 -- scripts/common.sh@365 -- # decimal 2 00:05:33.226 10:43:22 -- scripts/common.sh@352 -- # local d=2 00:05:33.226 10:43:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.226 10:43:22 -- scripts/common.sh@354 -- # echo 2 00:05:33.226 10:43:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:33.226 10:43:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:33.226 10:43:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:33.226 10:43:22 -- scripts/common.sh@367 -- # return 0 00:05:33.226 10:43:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.227 10:43:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:33.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.227 --rc genhtml_branch_coverage=1 00:05:33.227 --rc genhtml_function_coverage=1 00:05:33.227 --rc genhtml_legend=1 00:05:33.227 --rc geninfo_all_blocks=1 00:05:33.227 --rc geninfo_unexecuted_blocks=1 00:05:33.227 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.227 ' 00:05:33.227 10:43:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:33.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.227 --rc genhtml_branch_coverage=1 00:05:33.227 --rc genhtml_function_coverage=1 00:05:33.227 --rc genhtml_legend=1 00:05:33.227 --rc geninfo_all_blocks=1 00:05:33.227 --rc geninfo_unexecuted_blocks=1 00:05:33.227 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.227 ' 00:05:33.227 10:43:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:33.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.227 --rc genhtml_branch_coverage=1 00:05:33.227 --rc genhtml_function_coverage=1 00:05:33.227 --rc genhtml_legend=1 00:05:33.227 --rc geninfo_all_blocks=1 00:05:33.227 --rc geninfo_unexecuted_blocks=1 00:05:33.227 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.227 ' 00:05:33.227 10:43:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:33.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.227 --rc genhtml_branch_coverage=1 00:05:33.227 --rc genhtml_function_coverage=1 00:05:33.227 --rc genhtml_legend=1 00:05:33.227 --rc geninfo_all_blocks=1 00:05:33.227 --rc geninfo_unexecuted_blocks=1 00:05:33.227 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.227 ' 00:05:33.227 10:43:22 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:33.227 10:43:22 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:33.227 10:43:22 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:33.227 10:43:22 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:33.227 10:43:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.227 10:43:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.227 10:43:22 -- common/autotest_common.sh@10 -- # set +x 00:05:33.227 ************************************ 00:05:33.227 START TEST default_locks 
00:05:33.227 ************************************ 00:05:33.227 10:43:22 -- common/autotest_common.sh@1114 -- # default_locks 00:05:33.227 10:43:22 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1290010 00:05:33.227 10:43:22 -- event/cpu_locks.sh@47 -- # waitforlisten 1290010 00:05:33.227 10:43:22 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.227 10:43:22 -- common/autotest_common.sh@829 -- # '[' -z 1290010 ']' 00:05:33.227 10:43:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.227 10:43:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.227 10:43:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.227 10:43:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.227 10:43:22 -- common/autotest_common.sh@10 -- # set +x 00:05:33.227 [2024-12-15 10:43:22.234636] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:33.227 [2024-12-15 10:43:22.234723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1290010 ] 00:05:33.486 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.486 [2024-12-15 10:43:22.302205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.486 [2024-12-15 10:43:22.372206] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:33.486 [2024-12-15 10:43:22.372319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.055 10:43:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.055 10:43:23 -- common/autotest_common.sh@862 -- # return 0 00:05:34.055 10:43:23 -- event/cpu_locks.sh@49 -- # locks_exist 1290010 00:05:34.055 10:43:23 -- event/cpu_locks.sh@22 -- # lslocks -p 1290010 00:05:34.055 10:43:23 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.040 lslocks: write error 00:05:35.040 10:43:23 -- event/cpu_locks.sh@50 -- # killprocess 1290010 00:05:35.040 10:43:23 -- common/autotest_common.sh@936 -- # '[' -z 1290010 ']' 00:05:35.040 10:43:23 -- common/autotest_common.sh@940 -- # kill -0 1290010 00:05:35.040 10:43:23 -- common/autotest_common.sh@941 -- # uname 00:05:35.040 10:43:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:35.040 10:43:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1290010 00:05:35.040 10:43:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:35.040 10:43:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:35.040 10:43:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1290010' 00:05:35.040 killing process with pid 1290010 00:05:35.040 10:43:23 -- common/autotest_common.sh@955 -- # kill 1290010 00:05:35.040 10:43:23 -- common/autotest_common.sh@960 -- # wait 1290010 00:05:35.300 10:43:24 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1290010 00:05:35.300 10:43:24 -- common/autotest_common.sh@650 -- # local es=0 00:05:35.300 10:43:24 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1290010 00:05:35.300 10:43:24 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:35.300 10:43:24 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.300 10:43:24 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:35.300 10:43:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.300 10:43:24 -- common/autotest_common.sh@653 -- # waitforlisten 1290010 00:05:35.300 10:43:24 -- common/autotest_common.sh@829 -- # '[' -z 1290010 ']' 00:05:35.300 10:43:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.300 10:43:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.300 10:43:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.300 10:43:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.300 10:43:24 -- common/autotest_common.sh@10 -- # set +x 00:05:35.300 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1290010) - No such process 00:05:35.300 ERROR: process (pid: 1290010) is no longer running 00:05:35.300 10:43:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.300 10:43:24 -- common/autotest_common.sh@862 -- # return 1 00:05:35.300 10:43:24 -- common/autotest_common.sh@653 -- # es=1 00:05:35.300 10:43:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:35.300 10:43:24 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:35.300 10:43:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:35.300 10:43:24 -- event/cpu_locks.sh@54 -- # no_locks 00:05:35.300 10:43:24 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:35.300 10:43:24 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:35.300 10:43:24 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:35.300 00:05:35.300 real 0m1.859s 00:05:35.300 user 0m1.973s 00:05:35.300 sys 0m0.674s 00:05:35.300 10:43:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:35.300 10:43:24 -- common/autotest_common.sh@10 -- # set +x 00:05:35.300 ************************************ 00:05:35.300 END TEST default_locks 00:05:35.300 ************************************ 00:05:35.300 10:43:24 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:35.300 10:43:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.300 10:43:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.300 10:43:24 -- common/autotest_common.sh@10 -- # set +x 00:05:35.300 ************************************ 00:05:35.300 START TEST default_locks_via_rpc 00:05:35.300 ************************************ 00:05:35.300 10:43:24 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:35.300 10:43:24 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1290323 00:05:35.300 10:43:24 -- event/cpu_locks.sh@63 -- # waitforlisten 1290323 00:05:35.300 10:43:24 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.300 10:43:24 -- common/autotest_common.sh@829 -- # '[' -z 1290323 ']' 00:05:35.300 10:43:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.300 10:43:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.300 10:43:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:35.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.300 10:43:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.300 10:43:24 -- common/autotest_common.sh@10 -- # set +x 00:05:35.300 [2024-12-15 10:43:24.142665] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:35.300 [2024-12-15 10:43:24.142734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1290323 ] 00:05:35.300 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.300 [2024-12-15 10:43:24.210523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.300 [2024-12-15 10:43:24.280657] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:35.300 [2024-12-15 10:43:24.280769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.237 10:43:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.237 10:43:24 -- common/autotest_common.sh@862 -- # return 0 00:05:36.237 10:43:24 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:36.237 10:43:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.237 10:43:24 -- common/autotest_common.sh@10 -- # set +x 00:05:36.237 10:43:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.237 10:43:24 -- event/cpu_locks.sh@67 -- # no_locks 00:05:36.237 10:43:24 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:36.237 10:43:24 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:36.237 10:43:24 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:36.237 10:43:24 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:36.237 10:43:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.237 10:43:24 -- common/autotest_common.sh@10 -- # set +x 00:05:36.237 10:43:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.237 10:43:24 -- event/cpu_locks.sh@71 -- # locks_exist 1290323 00:05:36.237 10:43:24 -- event/cpu_locks.sh@22 -- # lslocks -p 1290323 00:05:36.237 10:43:24 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.804 10:43:25 -- event/cpu_locks.sh@73 -- # killprocess 1290323 00:05:36.804 10:43:25 -- common/autotest_common.sh@936 -- # '[' -z 1290323 ']' 00:05:36.804 10:43:25 -- common/autotest_common.sh@940 -- # kill -0 1290323 00:05:36.804 10:43:25 -- common/autotest_common.sh@941 -- # uname 00:05:36.804 10:43:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:36.804 10:43:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1290323 00:05:36.804 10:43:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:36.804 10:43:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:36.804 10:43:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1290323' 00:05:36.804 killing process with pid 1290323 00:05:36.804 10:43:25 -- common/autotest_common.sh@955 -- # kill 1290323 00:05:36.804 10:43:25 -- common/autotest_common.sh@960 -- # wait 1290323 00:05:37.063 00:05:37.063 real 0m1.791s 00:05:37.063 user 0m1.904s 00:05:37.063 sys 0m0.621s 00:05:37.063 10:43:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.063 10:43:25 -- common/autotest_common.sh@10 -- # set +x 00:05:37.063 ************************************ 00:05:37.063 END TEST default_locks_via_rpc 00:05:37.063 
************************************ 00:05:37.063 10:43:25 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:37.063 10:43:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.063 10:43:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.063 10:43:25 -- common/autotest_common.sh@10 -- # set +x 00:05:37.063 ************************************ 00:05:37.063 START TEST non_locking_app_on_locked_coremask 00:05:37.063 ************************************ 00:05:37.063 10:43:25 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:05:37.063 10:43:25 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1290636 00:05:37.063 10:43:25 -- event/cpu_locks.sh@81 -- # waitforlisten 1290636 /var/tmp/spdk.sock 00:05:37.063 10:43:25 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.063 10:43:25 -- common/autotest_common.sh@829 -- # '[' -z 1290636 ']' 00:05:37.063 10:43:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.063 10:43:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.063 10:43:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.063 10:43:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.063 10:43:25 -- common/autotest_common.sh@10 -- # set +x 00:05:37.063 [2024-12-15 10:43:25.984403] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:37.063 [2024-12-15 10:43:25.984476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1290636 ] 00:05:37.063 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.063 [2024-12-15 10:43:26.051579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.322 [2024-12-15 10:43:26.121920] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:37.322 [2024-12-15 10:43:26.122033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.890 10:43:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.891 10:43:26 -- common/autotest_common.sh@862 -- # return 0 00:05:37.891 10:43:26 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:37.891 10:43:26 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1290887 00:05:37.891 10:43:26 -- event/cpu_locks.sh@85 -- # waitforlisten 1290887 /var/tmp/spdk2.sock 00:05:37.891 10:43:26 -- common/autotest_common.sh@829 -- # '[' -z 1290887 ']' 00:05:37.891 10:43:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.891 10:43:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.891 10:43:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
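[editor's note] The cpu_locks tests that follow all lean on the same waitforlisten idiom visible in the trace: start spdk_tgt, then poll with max_retries=100 until its RPC UNIX socket answers. A rough sketch under those assumptions follows; the real autotest_common.sh helper also manages xtrace and richer error reporting, and the sleep interval here is an assumption.

#!/usr/bin/env bash
# Rough sketch of the waitforlisten pattern: poll until the target's
# RPC UNIX socket exists, giving up if the process dies first.
# max_retries and the socket default mirror the trace.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while ((max_retries-- > 0)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died early
        [ -S "$rpc_addr" ] && return 0           # socket is up
        sleep 0.1                                # assumed poll interval
    done
    return 1
}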
00:05:37.891 10:43:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.891 10:43:26 -- common/autotest_common.sh@10 -- # set +x 00:05:37.891 [2024-12-15 10:43:26.822665] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:37.891 [2024-12-15 10:43:26.822709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1290887 ] 00:05:37.891 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.150 [2024-12-15 10:43:26.908327] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:38.150 [2024-12-15 10:43:26.908356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.150 [2024-12-15 10:43:27.049259] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:38.150 [2024-12-15 10:43:27.049372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.718 10:43:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.718 10:43:27 -- common/autotest_common.sh@862 -- # return 0 00:05:38.718 10:43:27 -- event/cpu_locks.sh@87 -- # locks_exist 1290636 00:05:38.718 10:43:27 -- event/cpu_locks.sh@22 -- # lslocks -p 1290636 00:05:38.718 10:43:27 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.655 lslocks: write error 00:05:39.655 10:43:28 -- event/cpu_locks.sh@89 -- # killprocess 1290636 00:05:39.655 10:43:28 -- common/autotest_common.sh@936 -- # '[' -z 1290636 ']' 00:05:39.655 10:43:28 -- common/autotest_common.sh@940 -- # kill -0 1290636 00:05:39.915 10:43:28 -- common/autotest_common.sh@941 -- # uname 00:05:39.915 10:43:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:39.915 10:43:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1290636 00:05:39.915 10:43:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:39.915 10:43:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:39.915 10:43:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1290636' 00:05:39.915 killing process with pid 1290636 00:05:39.915 10:43:28 -- common/autotest_common.sh@955 -- # kill 1290636 00:05:39.915 10:43:28 -- common/autotest_common.sh@960 -- # wait 1290636 00:05:40.483 10:43:29 -- event/cpu_locks.sh@90 -- # killprocess 1290887 00:05:40.483 10:43:29 -- common/autotest_common.sh@936 -- # '[' -z 1290887 ']' 00:05:40.483 10:43:29 -- common/autotest_common.sh@940 -- # kill -0 1290887 00:05:40.483 10:43:29 -- common/autotest_common.sh@941 -- # uname 00:05:40.483 10:43:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:40.483 10:43:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1290887 00:05:40.483 10:43:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:40.483 10:43:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:40.483 10:43:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1290887' 00:05:40.483 killing process with pid 1290887 00:05:40.483 10:43:29 -- common/autotest_common.sh@955 -- # kill 1290887 00:05:40.483 10:43:29 -- common/autotest_common.sh@960 -- # wait 1290887 00:05:40.743 00:05:40.743 real 0m3.746s 00:05:40.743 user 0m4.027s 00:05:40.743 sys 0m1.224s 00:05:40.743 10:43:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:40.743 10:43:29 -- common/autotest_common.sh@10 -- # set +x 00:05:40.743 
************************************ 00:05:40.743 END TEST non_locking_app_on_locked_coremask 00:05:40.743 ************************************ 00:05:40.743 10:43:29 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:40.743 10:43:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.743 10:43:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.743 10:43:29 -- common/autotest_common.sh@10 -- # set +x 00:05:40.743 ************************************ 00:05:40.743 START TEST locking_app_on_unlocked_coremask 00:05:40.743 ************************************ 00:05:40.743 10:43:29 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:05:40.743 10:43:29 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1291457 00:05:41.003 10:43:29 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:41.003 10:43:29 -- event/cpu_locks.sh@99 -- # waitforlisten 1291457 /var/tmp/spdk.sock 00:05:41.003 10:43:29 -- common/autotest_common.sh@829 -- # '[' -z 1291457 ']' 00:05:41.003 10:43:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.003 10:43:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.003 10:43:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.003 10:43:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.003 10:43:29 -- common/autotest_common.sh@10 -- # set +x 00:05:41.003 [2024-12-15 10:43:29.781227] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:41.003 [2024-12-15 10:43:29.781316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1291457 ] 00:05:41.003 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.003 [2024-12-15 10:43:29.849901] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:41.003 [2024-12-15 10:43:29.849926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.003 [2024-12-15 10:43:29.923727] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:41.003 [2024-12-15 10:43:29.923839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.941 10:43:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.941 10:43:30 -- common/autotest_common.sh@862 -- # return 0 00:05:41.941 10:43:30 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:41.941 10:43:30 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1291522 00:05:41.941 10:43:30 -- event/cpu_locks.sh@103 -- # waitforlisten 1291522 /var/tmp/spdk2.sock 00:05:41.941 10:43:30 -- common/autotest_common.sh@829 -- # '[' -z 1291522 ']' 00:05:41.941 10:43:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.941 10:43:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.941 10:43:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:41.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.942 10:43:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.942 10:43:30 -- common/autotest_common.sh@10 -- # set +x 00:05:41.942 [2024-12-15 10:43:30.623613] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:41.942 [2024-12-15 10:43:30.623660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1291522 ] 00:05:41.942 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.942 [2024-12-15 10:43:30.715202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.942 [2024-12-15 10:43:30.861609] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:41.942 [2024-12-15 10:43:30.861722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.510 10:43:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.510 10:43:31 -- common/autotest_common.sh@862 -- # return 0 00:05:42.510 10:43:31 -- event/cpu_locks.sh@105 -- # locks_exist 1291522 00:05:42.510 10:43:31 -- event/cpu_locks.sh@22 -- # lslocks -p 1291522 00:05:42.510 10:43:31 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.079 lslocks: write error 00:05:43.079 10:43:32 -- event/cpu_locks.sh@107 -- # killprocess 1291457 00:05:43.079 10:43:32 -- common/autotest_common.sh@936 -- # '[' -z 1291457 ']' 00:05:43.080 10:43:32 -- common/autotest_common.sh@940 -- # kill -0 1291457 00:05:43.080 10:43:32 -- common/autotest_common.sh@941 -- # uname 00:05:43.080 10:43:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:43.080 10:43:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1291457 00:05:43.339 10:43:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:43.339 10:43:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:43.339 10:43:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1291457' 00:05:43.339 killing process with pid 1291457 00:05:43.339 10:43:32 -- common/autotest_common.sh@955 -- # kill 1291457 00:05:43.339 10:43:32 -- common/autotest_common.sh@960 -- # wait 1291457 00:05:43.908 10:43:32 -- event/cpu_locks.sh@108 -- # killprocess 1291522 00:05:43.908 10:43:32 -- common/autotest_common.sh@936 -- # '[' -z 1291522 ']' 00:05:43.908 10:43:32 -- common/autotest_common.sh@940 -- # kill -0 1291522 00:05:43.908 10:43:32 -- common/autotest_common.sh@941 -- # uname 00:05:43.908 10:43:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:43.908 10:43:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1291522 00:05:43.908 10:43:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:43.908 10:43:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:43.908 10:43:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1291522' 00:05:43.908 killing process with pid 1291522 00:05:43.908 10:43:32 -- common/autotest_common.sh@955 -- # kill 1291522 00:05:43.908 10:43:32 -- common/autotest_common.sh@960 -- # wait 1291522 00:05:44.167 00:05:44.167 real 0m3.318s 00:05:44.167 user 0m3.561s 00:05:44.167 sys 0m1.010s 00:05:44.167 10:43:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.167 10:43:33 -- common/autotest_common.sh@10 -- # set +x 00:05:44.167 
************************************ 00:05:44.167 END TEST locking_app_on_unlocked_coremask 00:05:44.167 ************************************ 00:05:44.167 10:43:33 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:44.167 10:43:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.167 10:43:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.167 10:43:33 -- common/autotest_common.sh@10 -- # set +x 00:05:44.167 ************************************ 00:05:44.167 START TEST locking_app_on_locked_coremask 00:05:44.167 ************************************ 00:05:44.167 10:43:33 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:05:44.167 10:43:33 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1292038 00:05:44.167 10:43:33 -- event/cpu_locks.sh@116 -- # waitforlisten 1292038 /var/tmp/spdk.sock 00:05:44.167 10:43:33 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.167 10:43:33 -- common/autotest_common.sh@829 -- # '[' -z 1292038 ']' 00:05:44.167 10:43:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.167 10:43:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.167 10:43:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.167 10:43:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.167 10:43:33 -- common/autotest_common.sh@10 -- # set +x 00:05:44.167 [2024-12-15 10:43:33.149382] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:44.167 [2024-12-15 10:43:33.149461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1292038 ] 00:05:44.426 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.426 [2024-12-15 10:43:33.218430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.426 [2024-12-15 10:43:33.288051] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:44.426 [2024-12-15 10:43:33.288167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.994 10:43:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.994 10:43:33 -- common/autotest_common.sh@862 -- # return 0 00:05:44.994 10:43:33 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1292254 00:05:44.994 10:43:33 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1292254 /var/tmp/spdk2.sock 00:05:44.994 10:43:33 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:44.994 10:43:33 -- common/autotest_common.sh@650 -- # local es=0 00:05:44.994 10:43:33 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1292254 /var/tmp/spdk2.sock 00:05:44.994 10:43:33 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:44.994 10:43:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:44.994 10:43:33 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:44.994 10:43:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:44.994 10:43:33 -- common/autotest_common.sh@653 -- # waitforlisten 1292254 /var/tmp/spdk2.sock 00:05:44.994 10:43:33 -- common/autotest_common.sh@829 -- # '[' -z 1292254 ']' 00:05:44.994 10:43:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.994 10:43:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.994 10:43:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.994 10:43:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.994 10:43:33 -- common/autotest_common.sh@10 -- # set +x 00:05:44.994 [2024-12-15 10:43:34.007911] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:44.994 [2024-12-15 10:43:34.007995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1292254 ] 00:05:45.254 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.254 [2024-12-15 10:43:34.099882] app.c: 666:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1292038 has claimed it. 00:05:45.254 [2024-12-15 10:43:34.099922] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
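[editor's note] The failure just logged is the point of this test: a second spdk_tgt on the same core mask cannot take the core-0 lock already held by pid 1292038, so spdk_app_start exits. The tests verify held locks with the lslocks probe seen throughout the trace; a sketch, assuming lslocks attributes the target's flock-style core locks by pid:

#!/usr/bin/env bash
# Sketch of the locks_exist probe the cpu_locks tests repeat: a held
# SPDK core lock shows up in lslocks output with an "spdk_cpu_lock"
# path, which is exactly what the trace greps for.
locks_exist_sketch() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

The stray "lslocks: write error" lines in the log appear to be pipe noise, since grep -q closes the pipe as soon as it matches, rather than a test failure.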
00:05:45.822 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1292254) - No such process 00:05:45.822 ERROR: process (pid: 1292254) is no longer running 00:05:45.822 10:43:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.822 10:43:34 -- common/autotest_common.sh@862 -- # return 1 00:05:45.822 10:43:34 -- common/autotest_common.sh@653 -- # es=1 00:05:45.822 10:43:34 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:45.822 10:43:34 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:45.822 10:43:34 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:45.822 10:43:34 -- event/cpu_locks.sh@122 -- # locks_exist 1292038 00:05:45.822 10:43:34 -- event/cpu_locks.sh@22 -- # lslocks -p 1292038 00:05:45.822 10:43:34 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.081 lslocks: write error 00:05:46.081 10:43:35 -- event/cpu_locks.sh@124 -- # killprocess 1292038 00:05:46.081 10:43:35 -- common/autotest_common.sh@936 -- # '[' -z 1292038 ']' 00:05:46.081 10:43:35 -- common/autotest_common.sh@940 -- # kill -0 1292038 00:05:46.081 10:43:35 -- common/autotest_common.sh@941 -- # uname 00:05:46.081 10:43:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:46.081 10:43:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1292038 00:05:46.340 10:43:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:46.340 10:43:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:46.340 10:43:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1292038' 00:05:46.340 killing process with pid 1292038 00:05:46.340 10:43:35 -- common/autotest_common.sh@955 -- # kill 1292038 00:05:46.340 10:43:35 -- common/autotest_common.sh@960 -- # wait 1292038 00:05:46.599 00:05:46.599 real 0m2.302s 00:05:46.599 user 0m2.516s 00:05:46.599 sys 0m0.676s 00:05:46.599 10:43:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:46.599 10:43:35 -- common/autotest_common.sh@10 -- # set +x 00:05:46.599 ************************************ 00:05:46.599 END TEST locking_app_on_locked_coremask 00:05:46.599 ************************************ 00:05:46.599 10:43:35 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:46.599 10:43:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.599 10:43:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.599 10:43:35 -- common/autotest_common.sh@10 -- # set +x 00:05:46.599 ************************************ 00:05:46.599 START TEST locking_overlapped_coremask 00:05:46.599 ************************************ 00:05:46.599 10:43:35 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:05:46.599 10:43:35 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1292538 00:05:46.599 10:43:35 -- event/cpu_locks.sh@133 -- # waitforlisten 1292538 /var/tmp/spdk.sock 00:05:46.599 10:43:35 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:46.599 10:43:35 -- common/autotest_common.sh@829 -- # '[' -z 1292538 ']' 00:05:46.599 10:43:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.599 10:43:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.599 10:43:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:46.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.599 10:43:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.599 10:43:35 -- common/autotest_common.sh@10 -- # set +x 00:05:46.599 [2024-12-15 10:43:35.498594] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:46.599 [2024-12-15 10:43:35.498660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1292538 ] 00:05:46.599 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.599 [2024-12-15 10:43:35.566342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:46.858 [2024-12-15 10:43:35.637525] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.858 [2024-12-15 10:43:35.637663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.858 [2024-12-15 10:43:35.637760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.858 [2024-12-15 10:43:35.637762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.427 10:43:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.427 10:43:36 -- common/autotest_common.sh@862 -- # return 0 00:05:47.427 10:43:36 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:47.427 10:43:36 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1292624 00:05:47.427 10:43:36 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1292624 /var/tmp/spdk2.sock 00:05:47.427 10:43:36 -- common/autotest_common.sh@650 -- # local es=0 00:05:47.427 10:43:36 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1292624 /var/tmp/spdk2.sock 00:05:47.427 10:43:36 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:47.427 10:43:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.427 10:43:36 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:47.427 10:43:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.427 10:43:36 -- common/autotest_common.sh@653 -- # waitforlisten 1292624 /var/tmp/spdk2.sock 00:05:47.427 10:43:36 -- common/autotest_common.sh@829 -- # '[' -z 1292624 ']' 00:05:47.427 10:43:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.427 10:43:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.427 10:43:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.427 10:43:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.427 10:43:36 -- common/autotest_common.sh@10 -- # set +x 00:05:47.427 [2024-12-15 10:43:36.339525] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:47.427 [2024-12-15 10:43:36.339571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1292624 ] 00:05:47.427 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.427 [2024-12-15 10:43:36.430243] app.c: 666:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1292538 has claimed it. 00:05:47.427 [2024-12-15 10:43:36.430283] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:47.995 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1292624) - No such process 00:05:47.995 ERROR: process (pid: 1292624) is no longer running 00:05:47.995 10:43:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.995 10:43:36 -- common/autotest_common.sh@862 -- # return 1 00:05:47.995 10:43:36 -- common/autotest_common.sh@653 -- # es=1 00:05:47.995 10:43:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:47.995 10:43:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:47.995 10:43:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:47.995 10:43:36 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:47.995 10:43:36 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:47.995 10:43:36 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:47.995 10:43:36 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:47.995 10:43:36 -- event/cpu_locks.sh@141 -- # killprocess 1292538 00:05:47.995 10:43:36 -- common/autotest_common.sh@936 -- # '[' -z 1292538 ']' 00:05:47.995 10:43:36 -- common/autotest_common.sh@940 -- # kill -0 1292538 00:05:47.995 10:43:36 -- common/autotest_common.sh@941 -- # uname 00:05:47.995 10:43:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:47.995 10:43:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1292538 00:05:48.254 10:43:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:48.254 10:43:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:48.254 10:43:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1292538' 00:05:48.254 killing process with pid 1292538 00:05:48.254 10:43:37 -- common/autotest_common.sh@955 -- # kill 1292538 00:05:48.254 10:43:37 -- common/autotest_common.sh@960 -- # wait 1292538 00:05:48.514 00:05:48.514 real 0m1.896s 00:05:48.514 user 0m5.382s 00:05:48.514 sys 0m0.429s 00:05:48.514 10:43:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:48.514 10:43:37 -- common/autotest_common.sh@10 -- # set +x 00:05:48.514 ************************************ 00:05:48.514 END TEST locking_overlapped_coremask 00:05:48.514 ************************************ 00:05:48.514 10:43:37 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:48.514 10:43:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:48.514 10:43:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.514 10:43:37 -- common/autotest_common.sh@10 -- # set +x 00:05:48.514 ************************************ 00:05:48.514 
START TEST locking_overlapped_coremask_via_rpc 00:05:48.514 ************************************ 00:05:48.514 10:43:37 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:05:48.514 10:43:37 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1292920 00:05:48.514 10:43:37 -- event/cpu_locks.sh@149 -- # waitforlisten 1292920 /var/tmp/spdk.sock 00:05:48.514 10:43:37 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:48.514 10:43:37 -- common/autotest_common.sh@829 -- # '[' -z 1292920 ']' 00:05:48.514 10:43:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.514 10:43:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.514 10:43:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.514 10:43:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.514 10:43:37 -- common/autotest_common.sh@10 -- # set +x 00:05:48.515 [2024-12-15 10:43:37.446931] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:48.515 [2024-12-15 10:43:37.447020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1292920 ] 00:05:48.515 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.515 [2024-12-15 10:43:37.515005] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:48.515 [2024-12-15 10:43:37.515029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:48.774 [2024-12-15 10:43:37.592357] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:48.774 [2024-12-15 10:43:37.592504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.774 [2024-12-15 10:43:37.592599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.774 [2024-12-15 10:43:37.592601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.343 10:43:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.343 10:43:38 -- common/autotest_common.sh@862 -- # return 0 00:05:49.343 10:43:38 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1292980 00:05:49.343 10:43:38 -- event/cpu_locks.sh@153 -- # waitforlisten 1292980 /var/tmp/spdk2.sock 00:05:49.343 10:43:38 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:49.343 10:43:38 -- common/autotest_common.sh@829 -- # '[' -z 1292980 ']' 00:05:49.343 10:43:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.343 10:43:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.343 10:43:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
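Note on this variant: both targets start with --disable-cpumask-locks (hence the "CPU core locks deactivated." notices above), so nothing is claimed at startup and the conflict is deferred to a runtime RPC. The masks are chosen to overlap on exactly one core: 0x7 covers cores 0-2, 0x1c covers cores 2-4, so 0x7 & 0x1c = 0x4, i.e. core 2, which is the core named in the failure below. A sketch of the two-step pattern, paths abbreviated from the commands traced above:

    spdk_tgt -m 0x7 --disable-cpumask-locks &   # start without claiming cores
    rpc.py framework_enable_cpumask_locks       # claim /var/tmp/spdk_cpu_lock_* now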
00:05:49.343 10:43:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.343 10:43:38 -- common/autotest_common.sh@10 -- # set +x 00:05:49.343 [2024-12-15 10:43:38.315576] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:49.343 [2024-12-15 10:43:38.315659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1292980 ] 00:05:49.343 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.603 [2024-12-15 10:43:38.412491] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:49.603 [2024-12-15 10:43:38.412520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:49.603 [2024-12-15 10:43:38.563863] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:49.603 [2024-12-15 10:43:38.564016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.603 [2024-12-15 10:43:38.564136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.603 [2024-12-15 10:43:38.564138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:50.171 10:43:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.171 10:43:39 -- common/autotest_common.sh@862 -- # return 0 00:05:50.171 10:43:39 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:50.171 10:43:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.171 10:43:39 -- common/autotest_common.sh@10 -- # set +x 00:05:50.171 10:43:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.171 10:43:39 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:50.171 10:43:39 -- common/autotest_common.sh@650 -- # local es=0 00:05:50.171 10:43:39 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:50.171 10:43:39 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:50.171 10:43:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.171 10:43:39 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:50.171 10:43:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.171 10:43:39 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:50.171 10:43:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.171 10:43:39 -- common/autotest_common.sh@10 -- # set +x 00:05:50.171 [2024-12-15 10:43:39.180484] app.c: 666:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1292920 has claimed it. 
00:05:50.430 request: 00:05:50.430 { 00:05:50.430 "method": "framework_enable_cpumask_locks", 00:05:50.430 "req_id": 1 00:05:50.430 } 00:05:50.430 Got JSON-RPC error response 00:05:50.430 response: 00:05:50.430 { 00:05:50.430 "code": -32603, 00:05:50.430 "message": "Failed to claim CPU core: 2" 00:05:50.430 } 00:05:50.430 10:43:39 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:50.430 10:43:39 -- common/autotest_common.sh@653 -- # es=1 00:05:50.430 10:43:39 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:50.430 10:43:39 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:50.430 10:43:39 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:50.430 10:43:39 -- event/cpu_locks.sh@158 -- # waitforlisten 1292920 /var/tmp/spdk.sock 00:05:50.430 10:43:39 -- common/autotest_common.sh@829 -- # '[' -z 1292920 ']' 00:05:50.430 10:43:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.430 10:43:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.430 10:43:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.430 10:43:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.430 10:43:39 -- common/autotest_common.sh@10 -- # set +x 00:05:50.430 10:43:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.430 10:43:39 -- common/autotest_common.sh@862 -- # return 0 00:05:50.430 10:43:39 -- event/cpu_locks.sh@159 -- # waitforlisten 1292980 /var/tmp/spdk2.sock 00:05:50.430 10:43:39 -- common/autotest_common.sh@829 -- # '[' -z 1292980 ']' 00:05:50.430 10:43:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.430 10:43:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.430 10:43:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
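Note on the JSON-RPC exchange above: framework_enable_cpumask_locks on the second target returns -32603 (the JSON-RPC internal-error code) with "Failed to claim CPU core: 2" because pid 1292920 already holds that core's lock, and the NOT wrapper turns the expected failure into es=1. From a client, the same call against the second target's socket would look roughly like this (method name as traced above; assumes the usual scripts/rpc.py -s socket flag):

    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
      || echo 'expected: a core in this mask is already claimed'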
00:05:50.430 10:43:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.430 10:43:39 -- common/autotest_common.sh@10 -- # set +x 00:05:50.690 10:43:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.690 10:43:39 -- common/autotest_common.sh@862 -- # return 0 00:05:50.690 10:43:39 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:50.690 10:43:39 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:50.690 10:43:39 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:50.690 10:43:39 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:50.690 00:05:50.690 real 0m2.154s 00:05:50.690 user 0m0.904s 00:05:50.690 sys 0m0.175s 00:05:50.690 10:43:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:50.690 10:43:39 -- common/autotest_common.sh@10 -- # set +x 00:05:50.690 ************************************ 00:05:50.690 END TEST locking_overlapped_coremask_via_rpc 00:05:50.690 ************************************ 00:05:50.690 10:43:39 -- event/cpu_locks.sh@174 -- # cleanup 00:05:50.690 10:43:39 -- event/cpu_locks.sh@15 -- # [[ -z 1292920 ]] 00:05:50.690 10:43:39 -- event/cpu_locks.sh@15 -- # killprocess 1292920 00:05:50.690 10:43:39 -- common/autotest_common.sh@936 -- # '[' -z 1292920 ']' 00:05:50.690 10:43:39 -- common/autotest_common.sh@940 -- # kill -0 1292920 00:05:50.690 10:43:39 -- common/autotest_common.sh@941 -- # uname 00:05:50.690 10:43:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:50.690 10:43:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1292920 00:05:50.690 10:43:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:50.690 10:43:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:50.690 10:43:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1292920' 00:05:50.690 killing process with pid 1292920 00:05:50.690 10:43:39 -- common/autotest_common.sh@955 -- # kill 1292920 00:05:50.690 10:43:39 -- common/autotest_common.sh@960 -- # wait 1292920 00:05:51.259 10:43:39 -- event/cpu_locks.sh@16 -- # [[ -z 1292980 ]] 00:05:51.259 10:43:39 -- event/cpu_locks.sh@16 -- # killprocess 1292980 00:05:51.259 10:43:39 -- common/autotest_common.sh@936 -- # '[' -z 1292980 ']' 00:05:51.259 10:43:39 -- common/autotest_common.sh@940 -- # kill -0 1292980 00:05:51.259 10:43:39 -- common/autotest_common.sh@941 -- # uname 00:05:51.259 10:43:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:51.259 10:43:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1292980 00:05:51.259 10:43:40 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:51.259 10:43:40 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:51.259 10:43:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1292980' 00:05:51.259 killing process with pid 1292980 00:05:51.259 10:43:40 -- common/autotest_common.sh@955 -- # kill 1292980 00:05:51.259 10:43:40 -- common/autotest_common.sh@960 -- # wait 1292980 00:05:51.519 10:43:40 -- event/cpu_locks.sh@18 -- # rm -f 00:05:51.519 10:43:40 -- event/cpu_locks.sh@1 -- # cleanup 00:05:51.519 10:43:40 -- event/cpu_locks.sh@15 -- # [[ -z 1292920 ]] 00:05:51.519 10:43:40 -- event/cpu_locks.sh@15 -- # killprocess 1292920 
00:05:51.519 10:43:40 -- common/autotest_common.sh@936 -- # '[' -z 1292920 ']' 00:05:51.519 10:43:40 -- common/autotest_common.sh@940 -- # kill -0 1292920 00:05:51.519 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1292920) - No such process 00:05:51.519 10:43:40 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1292920 is not found' 00:05:51.519 Process with pid 1292920 is not found 00:05:51.519 10:43:40 -- event/cpu_locks.sh@16 -- # [[ -z 1292980 ]] 00:05:51.519 10:43:40 -- event/cpu_locks.sh@16 -- # killprocess 1292980 00:05:51.519 10:43:40 -- common/autotest_common.sh@936 -- # '[' -z 1292980 ']' 00:05:51.519 10:43:40 -- common/autotest_common.sh@940 -- # kill -0 1292980 00:05:51.519 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1292980) - No such process 00:05:51.519 10:43:40 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1292980 is not found' 00:05:51.519 Process with pid 1292980 is not found 00:05:51.519 10:43:40 -- event/cpu_locks.sh@18 -- # rm -f 00:05:51.519 00:05:51.519 real 0m18.371s 00:05:51.519 user 0m31.216s 00:05:51.519 sys 0m5.786s 00:05:51.519 10:43:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:51.519 10:43:40 -- common/autotest_common.sh@10 -- # set +x 00:05:51.519 ************************************ 00:05:51.519 END TEST cpu_locks 00:05:51.519 ************************************ 00:05:51.519 00:05:51.519 real 0m44.089s 00:05:51.519 user 1m23.450s 00:05:51.519 sys 0m9.877s 00:05:51.519 10:43:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:51.519 10:43:40 -- common/autotest_common.sh@10 -- # set +x 00:05:51.519 ************************************ 00:05:51.519 END TEST event 00:05:51.519 ************************************ 00:05:51.519 10:43:40 -- spdk/autotest.sh@175 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:51.519 10:43:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:51.519 10:43:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.519 10:43:40 -- common/autotest_common.sh@10 -- # set +x 00:05:51.519 ************************************ 00:05:51.519 START TEST thread 00:05:51.519 ************************************ 00:05:51.519 10:43:40 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:51.777 * Looking for test storage... 
00:05:51.777 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:05:51.778 10:43:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:51.778 10:43:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:51.778 10:43:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:51.778 10:43:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:51.778 10:43:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:51.778 10:43:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:51.778 10:43:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:51.778 10:43:40 -- scripts/common.sh@335 -- # IFS=.-: 00:05:51.778 10:43:40 -- scripts/common.sh@335 -- # read -ra ver1 00:05:51.778 10:43:40 -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.778 10:43:40 -- scripts/common.sh@336 -- # read -ra ver2 00:05:51.778 10:43:40 -- scripts/common.sh@337 -- # local 'op=<' 00:05:51.778 10:43:40 -- scripts/common.sh@339 -- # ver1_l=2 00:05:51.778 10:43:40 -- scripts/common.sh@340 -- # ver2_l=1 00:05:51.778 10:43:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:51.778 10:43:40 -- scripts/common.sh@343 -- # case "$op" in 00:05:51.778 10:43:40 -- scripts/common.sh@344 -- # : 1 00:05:51.778 10:43:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:51.778 10:43:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.778 10:43:40 -- scripts/common.sh@364 -- # decimal 1 00:05:51.778 10:43:40 -- scripts/common.sh@352 -- # local d=1 00:05:51.778 10:43:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.778 10:43:40 -- scripts/common.sh@354 -- # echo 1 00:05:51.778 10:43:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:51.778 10:43:40 -- scripts/common.sh@365 -- # decimal 2 00:05:51.778 10:43:40 -- scripts/common.sh@352 -- # local d=2 00:05:51.778 10:43:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.778 10:43:40 -- scripts/common.sh@354 -- # echo 2 00:05:51.778 10:43:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:51.778 10:43:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:51.778 10:43:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:51.778 10:43:40 -- scripts/common.sh@367 -- # return 0 00:05:51.778 10:43:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.778 10:43:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:51.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.778 --rc genhtml_branch_coverage=1 00:05:51.778 --rc genhtml_function_coverage=1 00:05:51.778 --rc genhtml_legend=1 00:05:51.778 --rc geninfo_all_blocks=1 00:05:51.778 --rc geninfo_unexecuted_blocks=1 00:05:51.778 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:51.778 ' 00:05:51.778 10:43:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:51.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.778 --rc genhtml_branch_coverage=1 00:05:51.778 --rc genhtml_function_coverage=1 00:05:51.778 --rc genhtml_legend=1 00:05:51.778 --rc geninfo_all_blocks=1 00:05:51.778 --rc geninfo_unexecuted_blocks=1 00:05:51.778 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:51.778 ' 00:05:51.778 10:43:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:51.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.778 --rc genhtml_branch_coverage=1 
00:05:51.778 --rc genhtml_function_coverage=1 00:05:51.778 --rc genhtml_legend=1 00:05:51.778 --rc geninfo_all_blocks=1 00:05:51.778 --rc geninfo_unexecuted_blocks=1 00:05:51.778 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:51.778 ' 00:05:51.778 10:43:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:51.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.778 --rc genhtml_branch_coverage=1 00:05:51.778 --rc genhtml_function_coverage=1 00:05:51.778 --rc genhtml_legend=1 00:05:51.778 --rc geninfo_all_blocks=1 00:05:51.778 --rc geninfo_unexecuted_blocks=1 00:05:51.778 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:51.778 ' 00:05:51.778 10:43:40 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:51.778 10:43:40 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:51.778 10:43:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.778 10:43:40 -- common/autotest_common.sh@10 -- # set +x 00:05:51.778 ************************************ 00:05:51.778 START TEST thread_poller_perf 00:05:51.778 ************************************ 00:05:51.778 10:43:40 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:51.778 [2024-12-15 10:43:40.684470] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:51.778 [2024-12-15 10:43:40.684535] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1293569 ] 00:05:51.778 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.778 [2024-12-15 10:43:40.751866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.037 [2024-12-15 10:43:40.823664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.037 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:52.977 [2024-12-15T09:43:41.992Z] ====================================== 00:05:52.977 [2024-12-15T09:43:41.992Z] busy:2504223718 (cyc) 00:05:52.977 [2024-12-15T09:43:41.992Z] total_run_count: 807000 00:05:52.977 [2024-12-15T09:43:41.992Z] tsc_hz: 2500000000 (cyc) 00:05:52.977 [2024-12-15T09:43:41.992Z] ====================================== 00:05:52.977 [2024-12-15T09:43:41.992Z] poller_cost: 3103 (cyc), 1241 (nsec) 00:05:52.977 00:05:52.977 real 0m1.216s 00:05:52.977 user 0m1.132s 00:05:52.977 sys 0m0.079s 00:05:52.977 10:43:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.977 10:43:41 -- common/autotest_common.sh@10 -- # set +x 00:05:52.977 ************************************ 00:05:52.977 END TEST thread_poller_perf 00:05:52.977 ************************************ 00:05:52.977 10:43:41 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:52.977 10:43:41 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:52.977 10:43:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.977 10:43:41 -- common/autotest_common.sh@10 -- # set +x 00:05:52.977 ************************************ 00:05:52.977 START TEST thread_poller_perf 00:05:52.977 ************************************ 00:05:52.977 10:43:41 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:52.977 [2024-12-15 10:43:41.957155] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:52.977 [2024-12-15 10:43:41.957236] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1293854 ] 00:05:53.236 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.236 [2024-12-15 10:43:42.026093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.236 [2024-12-15 10:43:42.093224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.236 Running 1000 pollers for 1 seconds with 0 microseconds period. 
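Note on these summaries: poller_cost is plain arithmetic over the numbers printed with it, busy cycles divided by total_run_count gives cycles per poller invocation, and dividing by tsc_hz converts that to nanoseconds; the 0-microseconds-period run announced just above reports its summary next and follows the same derivation. Checking the 1-microsecond run:

    # 2504223718 cyc / 807000 runs = 3103 cyc per call; 3103 cyc / 2.5 GHz = 1241 nsec
    awk 'BEGIN { c = 2504223718 / 807000; print int(c), int(c * 1e9 / 2500000000) }'
    # prints: 3103 1241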
00:05:54.173 [2024-12-15T09:43:43.188Z] ====================================== 00:05:54.173 [2024-12-15T09:43:43.188Z] busy:2501841882 (cyc) 00:05:54.173 [2024-12-15T09:43:43.188Z] total_run_count: 13647000 00:05:54.173 [2024-12-15T09:43:43.188Z] tsc_hz: 2500000000 (cyc) 00:05:54.173 [2024-12-15T09:43:43.188Z] ====================================== 00:05:54.173 [2024-12-15T09:43:43.188Z] poller_cost: 183 (cyc), 73 (nsec) 00:05:54.173 00:05:54.173 real 0m1.217s 00:05:54.173 user 0m1.129s 00:05:54.173 sys 0m0.082s 00:05:54.173 10:43:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.173 10:43:43 -- common/autotest_common.sh@10 -- # set +x 00:05:54.173 ************************************ 00:05:54.173 END TEST thread_poller_perf 00:05:54.173 ************************************ 00:05:54.432 10:43:43 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:05:54.432 10:43:43 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:54.432 10:43:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.432 10:43:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.432 10:43:43 -- common/autotest_common.sh@10 -- # set +x 00:05:54.432 ************************************ 00:05:54.432 START TEST thread_spdk_lock 00:05:54.432 ************************************ 00:05:54.432 10:43:43 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:54.432 [2024-12-15 10:43:43.225839] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:54.432 [2024-12-15 10:43:43.225931] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1294073 ] 00:05:54.432 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.432 [2024-12-15 10:43:43.297327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.432 [2024-12-15 10:43:43.365076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.432 [2024-12-15 10:43:43.365079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.000 [2024-12-15 10:43:43.858154] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 957:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:55.000 [2024-12-15 10:43:43.858191] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3064:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:05:55.000 [2024-12-15 10:43:43.858201] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x1483c80 00:05:55.000 [2024-12-15 10:43:43.859131] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 852:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:55.000 [2024-12-15 10:43:43.859233] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1018:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:55.000 [2024-12-15 10:43:43.859252] 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 852:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:55.000 Starting test contend 00:05:55.000 Worker Delay Wait us Hold us Total us 00:05:55.000 0 3 171143 187797 358940 00:05:55.000 1 5 90172 288003 378175 00:05:55.000 PASS test contend 00:05:55.000 Starting test hold_by_poller 00:05:55.000 PASS test hold_by_poller 00:05:55.000 Starting test hold_by_message 00:05:55.000 PASS test hold_by_message 00:05:55.000 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:05:55.000 100014 assertions passed 00:05:55.000 0 assertions failed 00:05:55.000 00:05:55.001 real 0m0.712s 00:05:55.001 user 0m1.113s 00:05:55.001 sys 0m0.090s 00:05:55.001 10:43:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.001 10:43:43 -- common/autotest_common.sh@10 -- # set +x 00:05:55.001 ************************************ 00:05:55.001 END TEST thread_spdk_lock 00:05:55.001 ************************************ 00:05:55.001 00:05:55.001 real 0m3.487s 00:05:55.001 user 0m3.533s 00:05:55.001 sys 0m0.478s 00:05:55.001 10:43:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.001 10:43:43 -- common/autotest_common.sh@10 -- # set +x 00:05:55.001 ************************************ 00:05:55.001 END TEST thread 00:05:55.001 ************************************ 00:05:55.001 10:43:43 -- spdk/autotest.sh@176 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:05:55.001 10:43:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.001 10:43:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.001 10:43:43 -- common/autotest_common.sh@10 -- # set +x 00:05:55.001 ************************************ 00:05:55.001 START TEST accel 00:05:55.001 ************************************ 00:05:55.001 10:43:44 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:05:55.260 * Looking for test storage... 00:05:55.260 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:05:55.260 10:43:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:55.260 10:43:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:55.260 10:43:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:55.260 10:43:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:55.260 10:43:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:55.260 10:43:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:55.260 10:43:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:55.260 10:43:44 -- scripts/common.sh@335 -- # IFS=.-: 00:05:55.260 10:43:44 -- scripts/common.sh@335 -- # read -ra ver1 00:05:55.260 10:43:44 -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.260 10:43:44 -- scripts/common.sh@336 -- # read -ra ver2 00:05:55.260 10:43:44 -- scripts/common.sh@337 -- # local 'op=<' 00:05:55.260 10:43:44 -- scripts/common.sh@339 -- # ver1_l=2 00:05:55.260 10:43:44 -- scripts/common.sh@340 -- # ver2_l=1 00:05:55.260 10:43:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:55.260 10:43:44 -- scripts/common.sh@343 -- # case "$op" in 00:05:55.260 10:43:44 -- scripts/common.sh@344 -- # : 1 00:05:55.260 10:43:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:55.260 10:43:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.260 10:43:44 -- scripts/common.sh@364 -- # decimal 1 00:05:55.260 10:43:44 -- scripts/common.sh@352 -- # local d=1 00:05:55.260 10:43:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.260 10:43:44 -- scripts/common.sh@354 -- # echo 1 00:05:55.260 10:43:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:55.260 10:43:44 -- scripts/common.sh@365 -- # decimal 2 00:05:55.260 10:43:44 -- scripts/common.sh@352 -- # local d=2 00:05:55.260 10:43:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.260 10:43:44 -- scripts/common.sh@354 -- # echo 2 00:05:55.260 10:43:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:55.261 10:43:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:55.261 10:43:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:55.261 10:43:44 -- scripts/common.sh@367 -- # return 0 00:05:55.261 10:43:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.261 10:43:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:55.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.261 --rc genhtml_branch_coverage=1 00:05:55.261 --rc genhtml_function_coverage=1 00:05:55.261 --rc genhtml_legend=1 00:05:55.261 --rc geninfo_all_blocks=1 00:05:55.261 --rc geninfo_unexecuted_blocks=1 00:05:55.261 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:55.261 ' 00:05:55.261 10:43:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:55.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.261 --rc genhtml_branch_coverage=1 00:05:55.261 --rc genhtml_function_coverage=1 00:05:55.261 --rc genhtml_legend=1 00:05:55.261 --rc geninfo_all_blocks=1 00:05:55.261 --rc geninfo_unexecuted_blocks=1 00:05:55.261 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:55.261 ' 00:05:55.261 10:43:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:55.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.261 --rc genhtml_branch_coverage=1 00:05:55.261 --rc genhtml_function_coverage=1 00:05:55.261 --rc genhtml_legend=1 00:05:55.261 --rc geninfo_all_blocks=1 00:05:55.261 --rc geninfo_unexecuted_blocks=1 00:05:55.261 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:55.261 ' 00:05:55.261 10:43:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:55.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.261 --rc genhtml_branch_coverage=1 00:05:55.261 --rc genhtml_function_coverage=1 00:05:55.261 --rc genhtml_legend=1 00:05:55.261 --rc geninfo_all_blocks=1 00:05:55.261 --rc geninfo_unexecuted_blocks=1 00:05:55.261 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:55.261 ' 00:05:55.261 10:43:44 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:55.261 10:43:44 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:55.261 10:43:44 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:55.261 10:43:44 -- accel/accel.sh@59 -- # spdk_tgt_pid=1294218 00:05:55.261 10:43:44 -- accel/accel.sh@58 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:55.261 10:43:44 -- accel/accel.sh@60 -- # waitforlisten 1294218 00:05:55.261 10:43:44 -- common/autotest_common.sh@829 -- # '[' -z 1294218 ']' 00:05:55.261 10:43:44 -- accel/accel.sh@58 -- # 
build_accel_config 00:05:55.261 10:43:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.261 10:43:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.261 10:43:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.261 10:43:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.261 10:43:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.261 10:43:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.261 10:43:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.261 10:43:44 -- common/autotest_common.sh@10 -- # set +x 00:05:55.261 10:43:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.261 10:43:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.261 10:43:44 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.261 10:43:44 -- accel/accel.sh@42 -- # jq -r . 00:05:55.261 [2024-12-15 10:43:44.190156] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:55.261 [2024-12-15 10:43:44.190225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1294218 ] 00:05:55.261 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.261 [2024-12-15 10:43:44.257312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.520 [2024-12-15 10:43:44.327849] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.520 [2024-12-15 10:43:44.327959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.087 10:43:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.087 10:43:45 -- common/autotest_common.sh@862 -- # return 0 00:05:56.087 10:43:45 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:56.087 10:43:45 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:56.087 10:43:45 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:56.087 10:43:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.087 10:43:45 -- common/autotest_common.sh@10 -- # set +x 00:05:56.087 10:43:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.087 10:43:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.087 10:43:45 -- accel/accel.sh@64 -- # IFS== 00:05:56.087 10:43:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.087 10:43:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.087 10:43:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.087 10:43:45 -- accel/accel.sh@64 -- # IFS== 00:05:56.087 10:43:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.087 10:43:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.087 10:43:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.087 10:43:45 -- accel/accel.sh@64 -- # IFS== 00:05:56.087 10:43:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.087 10:43:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.087 10:43:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.087 10:43:45 -- accel/accel.sh@64 -- # IFS== 00:05:56.087 10:43:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.087 10:43:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.087 10:43:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.087 10:43:45 -- accel/accel.sh@64 -- # IFS== 00:05:56.087 10:43:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.087 10:43:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.087 10:43:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.087 10:43:45 -- accel/accel.sh@64 -- # IFS== 00:05:56.087 10:43:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.087 10:43:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.087 10:43:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.087 10:43:45 -- accel/accel.sh@64 -- # IFS== 00:05:56.087 10:43:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.087 10:43:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.087 10:43:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.087 10:43:45 -- accel/accel.sh@64 -- # IFS== 00:05:56.087 10:43:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.087 10:43:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.087 10:43:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.087 10:43:45 -- accel/accel.sh@64 -- # IFS== 00:05:56.087 10:43:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.087 10:43:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.087 10:43:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.087 10:43:45 -- accel/accel.sh@64 -- # IFS== 00:05:56.087 10:43:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.087 10:43:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.088 10:43:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.088 10:43:45 -- accel/accel.sh@64 -- # IFS== 00:05:56.088 10:43:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.088 10:43:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.088 10:43:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.088 10:43:45 -- accel/accel.sh@64 -- # IFS== 00:05:56.088 10:43:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.088 
10:43:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.088 10:43:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.088 10:43:45 -- accel/accel.sh@64 -- # IFS== 00:05:56.088 10:43:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.088 10:43:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.088 10:43:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.088 10:43:45 -- accel/accel.sh@64 -- # IFS== 00:05:56.088 10:43:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.088 10:43:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.088 10:43:45 -- accel/accel.sh@67 -- # killprocess 1294218 00:05:56.088 10:43:45 -- common/autotest_common.sh@936 -- # '[' -z 1294218 ']' 00:05:56.088 10:43:45 -- common/autotest_common.sh@940 -- # kill -0 1294218 00:05:56.088 10:43:45 -- common/autotest_common.sh@941 -- # uname 00:05:56.088 10:43:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:56.088 10:43:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1294218 00:05:56.349 10:43:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:56.349 10:43:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:56.349 10:43:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1294218' 00:05:56.349 killing process with pid 1294218 00:05:56.349 10:43:45 -- common/autotest_common.sh@955 -- # kill 1294218 00:05:56.349 10:43:45 -- common/autotest_common.sh@960 -- # wait 1294218 00:05:56.610 10:43:45 -- accel/accel.sh@68 -- # trap - ERR 00:05:56.610 10:43:45 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:56.610 10:43:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:56.610 10:43:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.610 10:43:45 -- common/autotest_common.sh@10 -- # set +x 00:05:56.610 10:43:45 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:05:56.610 10:43:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:56.610 10:43:45 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.610 10:43:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.610 10:43:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.610 10:43:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.610 10:43:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.610 10:43:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.610 10:43:45 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.610 10:43:45 -- accel/accel.sh@42 -- # jq -r . 
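Note on the long IFS== / read -r opc module run above: it is a single loop consuming accel_get_opc_assignments output. accel.sh flattens the RPC's JSON object into opcode=module lines with jq's to_entries and then splits each line on '='. The same transform on stand-in data (illustrative input, the real RPC's exact fields may differ):

    echo '{"copy":"software","fill":"software"}' \
        | jq -r 'to_entries | map("\(.key)=\(.value)") | .[]'
    # copy=software
    # fill=software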
00:05:56.610 10:43:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.610 10:43:45 -- common/autotest_common.sh@10 -- # set +x 00:05:56.610 10:43:45 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:56.610 10:43:45 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:56.610 10:43:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.610 10:43:45 -- common/autotest_common.sh@10 -- # set +x 00:05:56.610 ************************************ 00:05:56.610 START TEST accel_missing_filename 00:05:56.610 ************************************ 00:05:56.610 10:43:45 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:05:56.610 10:43:45 -- common/autotest_common.sh@650 -- # local es=0 00:05:56.610 10:43:45 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:56.610 10:43:45 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:56.610 10:43:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.610 10:43:45 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:56.610 10:43:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.610 10:43:45 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:05:56.610 10:43:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:56.610 10:43:45 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.610 10:43:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.610 10:43:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.610 10:43:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.610 10:43:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.610 10:43:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.610 10:43:45 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.610 10:43:45 -- accel/accel.sh@42 -- # jq -r . 00:05:56.610 [2024-12-15 10:43:45.531488] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:56.610 [2024-12-15 10:43:45.531579] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1294518 ] 00:05:56.610 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.610 [2024-12-15 10:43:45.602919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.869 [2024-12-15 10:43:45.672846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.869 [2024-12-15 10:43:45.712334] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:56.869 [2024-12-15 10:43:45.772118] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:56.869 A filename is required. 
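Note on the abort above: it is engineered. accel_perf's compress workload reads its input from the file named by -l, so running it without one stops at "A filename is required." before any work is issued. The compress_verify test that follows supplies the file but adds -y, which compress also rejects by design. For contrast, a plausibly valid invocation, untested sketch using the bib test file this workspace ships:

    build/examples/accel_perf -t 1 -w compress -l test/accel/bib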
00:05:56.869 10:43:45 -- common/autotest_common.sh@653 -- # es=234 00:05:56.869 10:43:45 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:56.869 10:43:45 -- common/autotest_common.sh@662 -- # es=106 00:05:56.869 10:43:45 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:56.869 10:43:45 -- common/autotest_common.sh@670 -- # es=1 00:05:56.869 10:43:45 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:56.869 00:05:56.869 real 0m0.328s 00:05:56.869 user 0m0.229s 00:05:56.869 sys 0m0.138s 00:05:56.869 10:43:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.869 10:43:45 -- common/autotest_common.sh@10 -- # set +x 00:05:56.869 ************************************ 00:05:56.869 END TEST accel_missing_filename 00:05:56.869 ************************************ 00:05:56.869 10:43:45 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:56.869 10:43:45 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:56.869 10:43:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.869 10:43:45 -- common/autotest_common.sh@10 -- # set +x 00:05:56.869 ************************************ 00:05:56.869 START TEST accel_compress_verify 00:05:56.869 ************************************ 00:05:56.869 10:43:45 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:56.869 10:43:45 -- common/autotest_common.sh@650 -- # local es=0 00:05:56.869 10:43:45 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:56.869 10:43:45 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:57.128 10:43:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.128 10:43:45 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:57.128 10:43:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.128 10:43:45 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:57.128 10:43:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:57.128 10:43:45 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.128 10:43:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.128 10:43:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.128 10:43:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.128 10:43:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:57.128 10:43:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.128 10:43:45 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.128 10:43:45 -- accel/accel.sh@42 -- # jq -r . 00:05:57.128 [2024-12-15 10:43:45.904319] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:57.128 [2024-12-15 10:43:45.904407] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1294554 ] 00:05:57.128 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.128 [2024-12-15 10:43:45.977409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.128 [2024-12-15 10:43:46.045481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.128 [2024-12-15 10:43:46.085000] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.388 [2024-12-15 10:43:46.144924] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:57.388 00:05:57.388 Compression does not support the verify option, aborting. 00:05:57.388 10:43:46 -- common/autotest_common.sh@653 -- # es=161 00:05:57.388 10:43:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.388 10:43:46 -- common/autotest_common.sh@662 -- # es=33 00:05:57.388 10:43:46 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:57.388 10:43:46 -- common/autotest_common.sh@670 -- # es=1 00:05:57.388 10:43:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.388 00:05:57.388 real 0m0.330s 00:05:57.388 user 0m0.231s 00:05:57.388 sys 0m0.137s 00:05:57.388 10:43:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.388 10:43:46 -- common/autotest_common.sh@10 -- # set +x 00:05:57.388 ************************************ 00:05:57.388 END TEST accel_compress_verify 00:05:57.388 ************************************ 00:05:57.388 10:43:46 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:57.388 10:43:46 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:57.388 10:43:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.388 10:43:46 -- common/autotest_common.sh@10 -- # set +x 00:05:57.388 ************************************ 00:05:57.388 START TEST accel_wrong_workload 00:05:57.388 ************************************ 00:05:57.388 10:43:46 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:05:57.388 10:43:46 -- common/autotest_common.sh@650 -- # local es=0 00:05:57.388 10:43:46 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:57.388 10:43:46 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:57.388 10:43:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.388 10:43:46 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:57.388 10:43:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.388 10:43:46 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:05:57.388 10:43:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:57.388 10:43:46 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.389 10:43:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.389 10:43:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.389 10:43:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.389 10:43:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:57.389 10:43:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.389 10:43:46 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.389 10:43:46 -- accel/accel.sh@42 -- # jq -r . 
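Every run in this log is preceded by the same build_accel_config trace: accel_json_cfg=(), three [[ 0 -gt 0 ]] module checks that all evaluate false on this node, local IFS=,, and a final jq -r . over the assembled JSON that accel_perf then reads back through -c /dev/fd/62. A hedged reconstruction of that helper's shape; the flag and method names below are assumptions, only the traced structure comes from this log:

    build_accel_config() {
        accel_json_cfg=()
        # Per-module enable checks (names assumed); each appends a JSON fragment when set.
        [[ ${SPDK_TEST_ACCEL_DSA:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
        [[ ${SPDK_TEST_ACCEL_IAA:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "iaa_scan_accel_module"}')
        [[ ${SPDK_TEST_IOAT:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "ioat_scan_accel_module"}')
        local IFS=,   # join the fragments with commas, matching the "local IFS=," trace line
        # jq -r . validates the assembled config before accel_perf consumes it.
        echo "{\"subsystems\":[{\"subsystem\":\"accel\",\"config\":[${accel_json_cfg[*]}]}]}" | jq -r .
    }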
00:05:57.389 Unsupported workload type: foobar 00:05:57.389 [2024-12-15 10:43:46.276132] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:57.389 accel_perf options: 00:05:57.389 [-h help message] 00:05:57.389 [-q queue depth per core] 00:05:57.389 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:57.389 [-T number of threads per core 00:05:57.389 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:57.389 [-t time in seconds] 00:05:57.389 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:57.389 [ dif_verify, , dif_generate, dif_generate_copy 00:05:57.389 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:57.389 [-l for compress/decompress workloads, name of uncompressed input file 00:05:57.389 [-S for crc32c workload, use this seed value (default 0) 00:05:57.389 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:57.389 [-f for fill workload, use this BYTE value (default 255) 00:05:57.389 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:57.389 [-y verify result if this switch is on] 00:05:57.389 [-a tasks to allocate per core (default: same value as -q)] 00:05:57.389 Can be used to spread operations across a wider range of memory. 00:05:57.389 10:43:46 -- common/autotest_common.sh@653 -- # es=1 00:05:57.389 10:43:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.389 10:43:46 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:57.389 10:43:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.389 00:05:57.389 real 0m0.026s 00:05:57.389 user 0m0.010s 00:05:57.389 sys 0m0.016s 00:05:57.389 10:43:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.389 10:43:46 -- common/autotest_common.sh@10 -- # set +x 00:05:57.389 ************************************ 00:05:57.389 END TEST accel_wrong_workload 00:05:57.389 ************************************ 00:05:57.389 Error: writing output failed: Broken pipe 00:05:57.389 10:43:46 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:57.389 10:43:46 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:57.389 10:43:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.389 10:43:46 -- common/autotest_common.sh@10 -- # set +x 00:05:57.389 ************************************ 00:05:57.389 START TEST accel_negative_buffers 00:05:57.389 ************************************ 00:05:57.389 10:43:46 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:57.389 10:43:46 -- common/autotest_common.sh@650 -- # local es=0 00:05:57.389 10:43:46 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:57.389 10:43:46 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:57.389 10:43:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.389 10:43:46 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:57.389 10:43:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.389 10:43:46 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:05:57.389 10:43:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:05:57.389 10:43:46 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.389 10:43:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.389 10:43:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.389 10:43:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.389 10:43:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:57.389 10:43:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.389 10:43:46 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.389 10:43:46 -- accel/accel.sh@42 -- # jq -r . 00:05:57.389 -x option must be non-negative. 00:05:57.389 [2024-12-15 10:43:46.341670] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:57.389 accel_perf options: 00:05:57.389 [-h help message] 00:05:57.389 [-q queue depth per core] 00:05:57.389 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:57.389 [-T number of threads per core 00:05:57.389 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:57.389 [-t time in seconds] 00:05:57.389 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:57.389 [ dif_verify, , dif_generate, dif_generate_copy 00:05:57.389 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:57.389 [-l for compress/decompress workloads, name of uncompressed input file 00:05:57.389 [-S for crc32c workload, use this seed value (default 0) 00:05:57.389 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:57.389 [-f for fill workload, use this BYTE value (default 255) 00:05:57.389 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:57.389 [-y verify result if this switch is on] 00:05:57.389 [-a tasks to allocate per core (default: same value as -q)] 00:05:57.389 Can be used to spread operations across a wider range of memory. 
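Before inverting the result, the harness normalizes the child's exit status: the es=234 from accel_missing_filename became 106 and then 1, the es=161 from accel_compress_verify became 33 and then 1, while the two usage errors in this stretch exit with es=1 directly. A sketch of that logic as reconstructed from the trace (the real NOT helper in autotest_common.sh may differ in detail):

    NOT() {
        local es=0
        "$@" || es=$?
        if (( es > 128 )); then   # signal-biased status, e.g. 234 or 161
            es=$((es - 128))      # 234 -> 106, 161 -> 33
            es=1                  # the traced case statement then collapses it to 1
        fi
        (( !es == 0 ))            # exit 0 (test passes) only when the wrapped command failed
    }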
00:05:57.389 10:43:46 -- common/autotest_common.sh@653 -- # es=1 00:05:57.389 10:43:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.389 10:43:46 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:57.389 10:43:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.389 00:05:57.389 real 0m0.025s 00:05:57.389 user 0m0.017s 00:05:57.389 sys 0m0.008s 00:05:57.389 10:43:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.389 10:43:46 -- common/autotest_common.sh@10 -- # set +x 00:05:57.389 ************************************ 00:05:57.389 END TEST accel_negative_buffers 00:05:57.389 ************************************ 00:05:57.389 Error: writing output failed: Broken pipe 00:05:57.389 10:43:46 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:57.389 10:43:46 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:57.389 10:43:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.389 10:43:46 -- common/autotest_common.sh@10 -- # set +x 00:05:57.389 ************************************ 00:05:57.389 START TEST accel_crc32c 00:05:57.389 ************************************ 00:05:57.389 10:43:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:57.389 10:43:46 -- accel/accel.sh@16 -- # local accel_opc 00:05:57.389 10:43:46 -- accel/accel.sh@17 -- # local accel_module 00:05:57.389 10:43:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:57.389 10:43:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:57.389 10:43:46 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.389 10:43:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.389 10:43:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.389 10:43:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.389 10:43:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:57.389 10:43:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.389 10:43:46 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.389 10:43:46 -- accel/accel.sh@42 -- # jq -r . 00:05:57.648 [2024-12-15 10:43:46.402405] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:57.648 [2024-12-15 10:43:46.402483] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1294853 ] 00:05:57.648 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.648 [2024-12-15 10:43:46.471103] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.648 [2024-12-15 10:43:46.540171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.028 10:43:47 -- accel/accel.sh@18 -- # out=' 00:05:59.028 SPDK Configuration: 00:05:59.028 Core mask: 0x1 00:05:59.028 00:05:59.028 Accel Perf Configuration: 00:05:59.028 Workload Type: crc32c 00:05:59.028 CRC-32C seed: 32 00:05:59.028 Transfer size: 4096 bytes 00:05:59.028 Vector count 1 00:05:59.028 Module: software 00:05:59.028 Queue depth: 32 00:05:59.028 Allocate depth: 32 00:05:59.028 # threads/core: 1 00:05:59.028 Run time: 1 seconds 00:05:59.028 Verify: Yes 00:05:59.028 00:05:59.028 Running for 1 seconds... 
00:05:59.028 00:05:59.028 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:59.028 ------------------------------------------------------------------------------------ 00:05:59.028 0,0 856192/s 3344 MiB/s 0 0 00:05:59.028 ==================================================================================== 00:05:59.028 Total 856192/s 3344 MiB/s 0 0' 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # IFS=: 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # read -r var val 00:05:59.028 10:43:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:59.028 10:43:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:59.028 10:43:47 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.028 10:43:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:59.028 10:43:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.028 10:43:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.028 10:43:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:59.028 10:43:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:59.028 10:43:47 -- accel/accel.sh@41 -- # local IFS=, 00:05:59.028 10:43:47 -- accel/accel.sh@42 -- # jq -r . 00:05:59.028 [2024-12-15 10:43:47.729644] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:59.028 [2024-12-15 10:43:47.729736] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1295023 ] 00:05:59.028 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.028 [2024-12-15 10:43:47.799128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.028 [2024-12-15 10:43:47.864796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.028 10:43:47 -- accel/accel.sh@21 -- # val= 00:05:59.028 10:43:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # IFS=: 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # read -r var val 00:05:59.028 10:43:47 -- accel/accel.sh@21 -- # val= 00:05:59.028 10:43:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # IFS=: 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # read -r var val 00:05:59.028 10:43:47 -- accel/accel.sh@21 -- # val=0x1 00:05:59.028 10:43:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # IFS=: 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # read -r var val 00:05:59.028 10:43:47 -- accel/accel.sh@21 -- # val= 00:05:59.028 10:43:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # IFS=: 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # read -r var val 00:05:59.028 10:43:47 -- accel/accel.sh@21 -- # val= 00:05:59.028 10:43:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # IFS=: 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # read -r var val 00:05:59.028 10:43:47 -- accel/accel.sh@21 -- # val=crc32c 00:05:59.028 10:43:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.028 10:43:47 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # IFS=: 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # read -r var val 00:05:59.028 10:43:47 -- accel/accel.sh@21 -- # val=32 00:05:59.028 10:43:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # IFS=: 00:05:59.028 
10:43:47 -- accel/accel.sh@20 -- # read -r var val 00:05:59.028 10:43:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:59.028 10:43:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # IFS=: 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # read -r var val 00:05:59.028 10:43:47 -- accel/accel.sh@21 -- # val= 00:05:59.028 10:43:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # IFS=: 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # read -r var val 00:05:59.028 10:43:47 -- accel/accel.sh@21 -- # val=software 00:05:59.028 10:43:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.028 10:43:47 -- accel/accel.sh@23 -- # accel_module=software 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # IFS=: 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # read -r var val 00:05:59.028 10:43:47 -- accel/accel.sh@21 -- # val=32 00:05:59.028 10:43:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # IFS=: 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # read -r var val 00:05:59.028 10:43:47 -- accel/accel.sh@21 -- # val=32 00:05:59.028 10:43:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # IFS=: 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # read -r var val 00:05:59.028 10:43:47 -- accel/accel.sh@21 -- # val=1 00:05:59.028 10:43:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # IFS=: 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # read -r var val 00:05:59.028 10:43:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:59.028 10:43:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # IFS=: 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # read -r var val 00:05:59.028 10:43:47 -- accel/accel.sh@21 -- # val=Yes 00:05:59.028 10:43:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # IFS=: 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # read -r var val 00:05:59.028 10:43:47 -- accel/accel.sh@21 -- # val= 00:05:59.028 10:43:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # IFS=: 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # read -r var val 00:05:59.028 10:43:47 -- accel/accel.sh@21 -- # val= 00:05:59.028 10:43:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # IFS=: 00:05:59.028 10:43:47 -- accel/accel.sh@20 -- # read -r var val 00:06:00.408 10:43:49 -- accel/accel.sh@21 -- # val= 00:06:00.408 10:43:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.408 10:43:49 -- accel/accel.sh@20 -- # IFS=: 00:06:00.408 10:43:49 -- accel/accel.sh@20 -- # read -r var val 00:06:00.408 10:43:49 -- accel/accel.sh@21 -- # val= 00:06:00.408 10:43:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.408 10:43:49 -- accel/accel.sh@20 -- # IFS=: 00:06:00.408 10:43:49 -- accel/accel.sh@20 -- # read -r var val 00:06:00.408 10:43:49 -- accel/accel.sh@21 -- # val= 00:06:00.408 10:43:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.408 10:43:49 -- accel/accel.sh@20 -- # IFS=: 00:06:00.408 10:43:49 -- accel/accel.sh@20 -- # read -r var val 00:06:00.408 10:43:49 -- accel/accel.sh@21 -- # val= 00:06:00.408 10:43:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.408 10:43:49 -- accel/accel.sh@20 -- # IFS=: 00:06:00.408 10:43:49 -- accel/accel.sh@20 -- # read -r var val 00:06:00.408 10:43:49 -- accel/accel.sh@21 -- # val= 00:06:00.408 10:43:49 -- accel/accel.sh@22 -- # case "$var" in 
00:06:00.408 10:43:49 -- accel/accel.sh@20 -- # IFS=: 00:06:00.408 10:43:49 -- accel/accel.sh@20 -- # read -r var val 00:06:00.408 10:43:49 -- accel/accel.sh@21 -- # val= 00:06:00.408 10:43:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.408 10:43:49 -- accel/accel.sh@20 -- # IFS=: 00:06:00.408 10:43:49 -- accel/accel.sh@20 -- # read -r var val 00:06:00.408 10:43:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:00.408 10:43:49 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:00.408 10:43:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.408 00:06:00.408 real 0m2.646s 00:06:00.408 user 0m2.383s 00:06:00.408 sys 0m0.264s 00:06:00.408 10:43:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:00.408 10:43:49 -- common/autotest_common.sh@10 -- # set +x 00:06:00.408 ************************************ 00:06:00.408 END TEST accel_crc32c 00:06:00.408 ************************************ 00:06:00.408 10:43:49 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:00.408 10:43:49 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:00.408 10:43:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.408 10:43:49 -- common/autotest_common.sh@10 -- # set +x 00:06:00.408 ************************************ 00:06:00.408 START TEST accel_crc32c_C2 00:06:00.408 ************************************ 00:06:00.408 10:43:49 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:00.408 10:43:49 -- accel/accel.sh@16 -- # local accel_opc 00:06:00.408 10:43:49 -- accel/accel.sh@17 -- # local accel_module 00:06:00.408 10:43:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:00.408 10:43:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:00.408 10:43:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.408 10:43:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.408 10:43:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.408 10:43:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.408 10:43:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.408 10:43:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.408 10:43:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.408 10:43:49 -- accel/accel.sh@42 -- # jq -r . 00:06:00.408 [2024-12-15 10:43:49.101706] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:00.408 [2024-12-15 10:43:49.101792] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1295228 ] 00:06:00.408 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.408 [2024-12-15 10:43:49.171756] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.408 [2024-12-15 10:43:49.239517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.786 10:43:50 -- accel/accel.sh@18 -- # out=' 00:06:01.786 SPDK Configuration: 00:06:01.786 Core mask: 0x1 00:06:01.786 00:06:01.786 Accel Perf Configuration: 00:06:01.786 Workload Type: crc32c 00:06:01.786 CRC-32C seed: 0 00:06:01.786 Transfer size: 4096 bytes 00:06:01.786 Vector count 2 00:06:01.786 Module: software 00:06:01.786 Queue depth: 32 00:06:01.786 Allocate depth: 32 00:06:01.786 # threads/core: 1 00:06:01.786 Run time: 1 seconds 00:06:01.786 Verify: Yes 00:06:01.786 00:06:01.786 Running for 1 seconds... 00:06:01.786 00:06:01.786 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:01.787 ------------------------------------------------------------------------------------ 00:06:01.787 0,0 611232/s 2387 MiB/s 0 0 00:06:01.787 ==================================================================================== 00:06:01.787 Total 611232/s 2387 MiB/s 0 0' 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:01.787 10:43:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:01.787 10:43:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:01.787 10:43:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:01.787 10:43:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:01.787 10:43:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.787 10:43:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.787 10:43:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:01.787 10:43:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:01.787 10:43:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:01.787 10:43:50 -- accel/accel.sh@42 -- # jq -r . 00:06:01.787 [2024-12-15 10:43:50.430441] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:01.787 [2024-12-15 10:43:50.430531] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1295435 ] 00:06:01.787 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.787 [2024-12-15 10:43:50.501051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.787 [2024-12-15 10:43:50.568809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.787 10:43:50 -- accel/accel.sh@21 -- # val= 00:06:01.787 10:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:01.787 10:43:50 -- accel/accel.sh@21 -- # val= 00:06:01.787 10:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:01.787 10:43:50 -- accel/accel.sh@21 -- # val=0x1 00:06:01.787 10:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:01.787 10:43:50 -- accel/accel.sh@21 -- # val= 00:06:01.787 10:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:01.787 10:43:50 -- accel/accel.sh@21 -- # val= 00:06:01.787 10:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:01.787 10:43:50 -- accel/accel.sh@21 -- # val=crc32c 00:06:01.787 10:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.787 10:43:50 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:01.787 10:43:50 -- accel/accel.sh@21 -- # val=0 00:06:01.787 10:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:01.787 10:43:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:01.787 10:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:01.787 10:43:50 -- accel/accel.sh@21 -- # val= 00:06:01.787 10:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:01.787 10:43:50 -- accel/accel.sh@21 -- # val=software 00:06:01.787 10:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.787 10:43:50 -- accel/accel.sh@23 -- # accel_module=software 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:01.787 10:43:50 -- accel/accel.sh@21 -- # val=32 00:06:01.787 10:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:01.787 10:43:50 -- accel/accel.sh@21 -- # val=32 00:06:01.787 10:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:01.787 10:43:50 -- 
accel/accel.sh@21 -- # val=1 00:06:01.787 10:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:01.787 10:43:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:01.787 10:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:01.787 10:43:50 -- accel/accel.sh@21 -- # val=Yes 00:06:01.787 10:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:01.787 10:43:50 -- accel/accel.sh@21 -- # val= 00:06:01.787 10:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:01.787 10:43:50 -- accel/accel.sh@21 -- # val= 00:06:01.787 10:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:01.787 10:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:02.725 10:43:51 -- accel/accel.sh@21 -- # val= 00:06:02.725 10:43:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.725 10:43:51 -- accel/accel.sh@20 -- # IFS=: 00:06:02.725 10:43:51 -- accel/accel.sh@20 -- # read -r var val 00:06:02.725 10:43:51 -- accel/accel.sh@21 -- # val= 00:06:02.725 10:43:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.725 10:43:51 -- accel/accel.sh@20 -- # IFS=: 00:06:02.725 10:43:51 -- accel/accel.sh@20 -- # read -r var val 00:06:02.725 10:43:51 -- accel/accel.sh@21 -- # val= 00:06:02.725 10:43:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.725 10:43:51 -- accel/accel.sh@20 -- # IFS=: 00:06:02.725 10:43:51 -- accel/accel.sh@20 -- # read -r var val 00:06:02.725 10:43:51 -- accel/accel.sh@21 -- # val= 00:06:02.725 10:43:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.725 10:43:51 -- accel/accel.sh@20 -- # IFS=: 00:06:02.725 10:43:51 -- accel/accel.sh@20 -- # read -r var val 00:06:02.725 10:43:51 -- accel/accel.sh@21 -- # val= 00:06:02.725 10:43:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.725 10:43:51 -- accel/accel.sh@20 -- # IFS=: 00:06:02.725 10:43:51 -- accel/accel.sh@20 -- # read -r var val 00:06:02.725 10:43:51 -- accel/accel.sh@21 -- # val= 00:06:02.725 10:43:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.725 10:43:51 -- accel/accel.sh@20 -- # IFS=: 00:06:02.725 10:43:51 -- accel/accel.sh@20 -- # read -r var val 00:06:02.725 10:43:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:02.725 10:43:51 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:02.725 10:43:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.725 00:06:02.725 real 0m2.658s 00:06:02.725 user 0m2.391s 00:06:02.725 sys 0m0.266s 00:06:02.725 10:43:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.725 10:43:51 -- common/autotest_common.sh@10 -- # set +x 00:06:02.725 ************************************ 00:06:02.725 END TEST accel_crc32c_C2 00:06:02.725 ************************************ 00:06:02.985 10:43:51 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:02.985 10:43:51 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:02.985 10:43:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.985 10:43:51 -- common/autotest_common.sh@10 -- # set +x 00:06:02.985 ************************************ 00:06:02.985 START TEST accel_copy 
00:06:02.985 ************************************ 00:06:02.985 10:43:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:02.985 10:43:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:02.985 10:43:51 -- accel/accel.sh@17 -- # local accel_module 00:06:02.985 10:43:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:02.985 10:43:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:02.985 10:43:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.985 10:43:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:02.985 10:43:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.985 10:43:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.985 10:43:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:02.985 10:43:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:02.985 10:43:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:02.985 10:43:51 -- accel/accel.sh@42 -- # jq -r . 00:06:02.985 [2024-12-15 10:43:51.804371] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:02.985 [2024-12-15 10:43:51.804521] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1295717 ] 00:06:02.985 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.985 [2024-12-15 10:43:51.875489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.985 [2024-12-15 10:43:51.942746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.364 10:43:53 -- accel/accel.sh@18 -- # out=' 00:06:04.364 SPDK Configuration: 00:06:04.364 Core mask: 0x1 00:06:04.364 00:06:04.364 Accel Perf Configuration: 00:06:04.364 Workload Type: copy 00:06:04.364 Transfer size: 4096 bytes 00:06:04.364 Vector count 1 00:06:04.364 Module: software 00:06:04.364 Queue depth: 32 00:06:04.364 Allocate depth: 32 00:06:04.364 # threads/core: 1 00:06:04.364 Run time: 1 seconds 00:06:04.364 Verify: Yes 00:06:04.364 00:06:04.364 Running for 1 seconds... 00:06:04.364 00:06:04.364 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:04.364 ------------------------------------------------------------------------------------ 00:06:04.364 0,0 538848/s 2104 MiB/s 0 0 00:06:04.364 ==================================================================================== 00:06:04.364 Total 538848/s 2104 MiB/s 0 0' 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:04.364 10:43:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:04.364 10:43:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:04.364 10:43:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.364 10:43:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.364 10:43:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.364 10:43:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.364 10:43:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.364 10:43:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.364 10:43:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.364 10:43:53 -- accel/accel.sh@42 -- # jq -r . 00:06:04.364 [2024-12-15 10:43:53.131045] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:04.364 [2024-12-15 10:43:53.131131] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1295988 ] 00:06:04.364 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.364 [2024-12-15 10:43:53.200202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.364 [2024-12-15 10:43:53.266053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.364 10:43:53 -- accel/accel.sh@21 -- # val= 00:06:04.364 10:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:04.364 10:43:53 -- accel/accel.sh@21 -- # val= 00:06:04.364 10:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:04.364 10:43:53 -- accel/accel.sh@21 -- # val=0x1 00:06:04.364 10:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:04.364 10:43:53 -- accel/accel.sh@21 -- # val= 00:06:04.364 10:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:04.364 10:43:53 -- accel/accel.sh@21 -- # val= 00:06:04.364 10:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:04.364 10:43:53 -- accel/accel.sh@21 -- # val=copy 00:06:04.364 10:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.364 10:43:53 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:04.364 10:43:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:04.364 10:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:04.364 10:43:53 -- accel/accel.sh@21 -- # val= 00:06:04.364 10:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:04.364 10:43:53 -- accel/accel.sh@21 -- # val=software 00:06:04.364 10:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.364 10:43:53 -- accel/accel.sh@23 -- # accel_module=software 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:04.364 10:43:53 -- accel/accel.sh@21 -- # val=32 00:06:04.364 10:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:04.364 10:43:53 -- accel/accel.sh@21 -- # val=32 00:06:04.364 10:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:04.364 10:43:53 -- accel/accel.sh@21 -- # val=1 00:06:04.364 10:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:04.364 10:43:53 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:04.364 10:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.364 10:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:04.365 10:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:04.365 10:43:53 -- accel/accel.sh@21 -- # val=Yes 00:06:04.365 10:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.365 10:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:04.365 10:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:04.365 10:43:53 -- accel/accel.sh@21 -- # val= 00:06:04.365 10:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.365 10:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:04.365 10:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:04.365 10:43:53 -- accel/accel.sh@21 -- # val= 00:06:04.365 10:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.365 10:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:04.365 10:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:05.743 10:43:54 -- accel/accel.sh@21 -- # val= 00:06:05.743 10:43:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.743 10:43:54 -- accel/accel.sh@20 -- # IFS=: 00:06:05.743 10:43:54 -- accel/accel.sh@20 -- # read -r var val 00:06:05.743 10:43:54 -- accel/accel.sh@21 -- # val= 00:06:05.743 10:43:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.743 10:43:54 -- accel/accel.sh@20 -- # IFS=: 00:06:05.743 10:43:54 -- accel/accel.sh@20 -- # read -r var val 00:06:05.743 10:43:54 -- accel/accel.sh@21 -- # val= 00:06:05.743 10:43:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.743 10:43:54 -- accel/accel.sh@20 -- # IFS=: 00:06:05.743 10:43:54 -- accel/accel.sh@20 -- # read -r var val 00:06:05.743 10:43:54 -- accel/accel.sh@21 -- # val= 00:06:05.743 10:43:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.743 10:43:54 -- accel/accel.sh@20 -- # IFS=: 00:06:05.743 10:43:54 -- accel/accel.sh@20 -- # read -r var val 00:06:05.743 10:43:54 -- accel/accel.sh@21 -- # val= 00:06:05.743 10:43:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.743 10:43:54 -- accel/accel.sh@20 -- # IFS=: 00:06:05.743 10:43:54 -- accel/accel.sh@20 -- # read -r var val 00:06:05.743 10:43:54 -- accel/accel.sh@21 -- # val= 00:06:05.743 10:43:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.743 10:43:54 -- accel/accel.sh@20 -- # IFS=: 00:06:05.743 10:43:54 -- accel/accel.sh@20 -- # read -r var val 00:06:05.743 10:43:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:05.743 10:43:54 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:05.743 10:43:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.743 00:06:05.743 real 0m2.653s 00:06:05.743 user 0m2.387s 00:06:05.743 sys 0m0.264s 00:06:05.743 10:43:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.743 10:43:54 -- common/autotest_common.sh@10 -- # set +x 00:06:05.743 ************************************ 00:06:05.743 END TEST accel_copy 00:06:05.743 ************************************ 00:06:05.743 10:43:54 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:05.743 10:43:54 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:05.743 10:43:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.743 10:43:54 -- common/autotest_common.sh@10 -- # set +x 00:06:05.743 ************************************ 00:06:05.743 START TEST accel_fill 00:06:05.743 ************************************ 00:06:05.743 10:43:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:05.743 10:43:54 -- accel/accel.sh@16 -- # local accel_opc 
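The accel_fill run that starts here is the only test in this batch that overrides the queue defaults; its flags map directly onto the accel_perf option help dumped earlier in this log:

    # Flag meanings taken from the usage text printed above:
    #   -f 128  fill BYTE value (reported as "Fill pattern: 0x80" in the config dump)
    #   -q 64   queue depth per core, reported as "Queue depth: 64"
    #   -a 64   tasks to allocate per core, reported as "Allocate depth: 64"
    #   -y      verify the result
    ACCEL_PERF=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf
    "$ACCEL_PERF" -t 1 -w fill -f 128 -q 64 -a 64 -y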
00:06:05.743 10:43:54 -- accel/accel.sh@17 -- # local accel_module 00:06:05.743 10:43:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:05.743 10:43:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:05.743 10:43:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.743 10:43:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:05.743 10:43:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.743 10:43:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.743 10:43:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:05.743 10:43:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:05.743 10:43:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:05.743 10:43:54 -- accel/accel.sh@42 -- # jq -r . 00:06:05.743 [2024-12-15 10:43:54.500840] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:05.743 [2024-12-15 10:43:54.500922] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1296271 ] 00:06:05.743 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.743 [2024-12-15 10:43:54.569525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.743 [2024-12-15 10:43:54.637190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.120 10:43:55 -- accel/accel.sh@18 -- # out=' 00:06:07.120 SPDK Configuration: 00:06:07.120 Core mask: 0x1 00:06:07.120 00:06:07.120 Accel Perf Configuration: 00:06:07.120 Workload Type: fill 00:06:07.120 Fill pattern: 0x80 00:06:07.120 Transfer size: 4096 bytes 00:06:07.120 Vector count 1 00:06:07.120 Module: software 00:06:07.120 Queue depth: 64 00:06:07.120 Allocate depth: 64 00:06:07.120 # threads/core: 1 00:06:07.120 Run time: 1 seconds 00:06:07.120 Verify: Yes 00:06:07.120 00:06:07.120 Running for 1 seconds... 00:06:07.120 00:06:07.120 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:07.120 ------------------------------------------------------------------------------------ 00:06:07.121 0,0 956480/s 3736 MiB/s 0 0 00:06:07.121 ==================================================================================== 00:06:07.121 Total 956480/s 3736 MiB/s 0 0' 00:06:07.121 10:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:07.121 10:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:07.121 10:43:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:07.121 10:43:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:07.121 10:43:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.121 10:43:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.121 10:43:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.121 10:43:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.121 10:43:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.121 10:43:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.121 10:43:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.121 10:43:55 -- accel/accel.sh@42 -- # jq -r . 00:06:07.121 [2024-12-15 10:43:55.827481] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:07.121 [2024-12-15 10:43:55.827571] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1296537 ] 00:06:07.121 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.121 [2024-12-15 10:43:55.898637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.121 [2024-12-15 10:43:55.964596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.121 10:43:56 -- accel/accel.sh@21 -- # val= 00:06:07.121 10:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:07.121 10:43:56 -- accel/accel.sh@21 -- # val= 00:06:07.121 10:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:07.121 10:43:56 -- accel/accel.sh@21 -- # val=0x1 00:06:07.121 10:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:07.121 10:43:56 -- accel/accel.sh@21 -- # val= 00:06:07.121 10:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:07.121 10:43:56 -- accel/accel.sh@21 -- # val= 00:06:07.121 10:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:07.121 10:43:56 -- accel/accel.sh@21 -- # val=fill 00:06:07.121 10:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.121 10:43:56 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:07.121 10:43:56 -- accel/accel.sh@21 -- # val=0x80 00:06:07.121 10:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:07.121 10:43:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:07.121 10:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:07.121 10:43:56 -- accel/accel.sh@21 -- # val= 00:06:07.121 10:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:07.121 10:43:56 -- accel/accel.sh@21 -- # val=software 00:06:07.121 10:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.121 10:43:56 -- accel/accel.sh@23 -- # accel_module=software 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:07.121 10:43:56 -- accel/accel.sh@21 -- # val=64 00:06:07.121 10:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:07.121 10:43:56 -- accel/accel.sh@21 -- # val=64 00:06:07.121 10:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:07.121 10:43:56 -- 
accel/accel.sh@21 -- # val=1 00:06:07.121 10:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:07.121 10:43:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:07.121 10:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:07.121 10:43:56 -- accel/accel.sh@21 -- # val=Yes 00:06:07.121 10:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:07.121 10:43:56 -- accel/accel.sh@21 -- # val= 00:06:07.121 10:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:07.121 10:43:56 -- accel/accel.sh@21 -- # val= 00:06:07.121 10:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:07.121 10:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:08.575 10:43:57 -- accel/accel.sh@21 -- # val= 00:06:08.575 10:43:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.575 10:43:57 -- accel/accel.sh@20 -- # IFS=: 00:06:08.575 10:43:57 -- accel/accel.sh@20 -- # read -r var val 00:06:08.575 10:43:57 -- accel/accel.sh@21 -- # val= 00:06:08.575 10:43:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.575 10:43:57 -- accel/accel.sh@20 -- # IFS=: 00:06:08.575 10:43:57 -- accel/accel.sh@20 -- # read -r var val 00:06:08.575 10:43:57 -- accel/accel.sh@21 -- # val= 00:06:08.575 10:43:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.575 10:43:57 -- accel/accel.sh@20 -- # IFS=: 00:06:08.575 10:43:57 -- accel/accel.sh@20 -- # read -r var val 00:06:08.575 10:43:57 -- accel/accel.sh@21 -- # val= 00:06:08.575 10:43:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.575 10:43:57 -- accel/accel.sh@20 -- # IFS=: 00:06:08.575 10:43:57 -- accel/accel.sh@20 -- # read -r var val 00:06:08.575 10:43:57 -- accel/accel.sh@21 -- # val= 00:06:08.575 10:43:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.575 10:43:57 -- accel/accel.sh@20 -- # IFS=: 00:06:08.575 10:43:57 -- accel/accel.sh@20 -- # read -r var val 00:06:08.575 10:43:57 -- accel/accel.sh@21 -- # val= 00:06:08.575 10:43:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.575 10:43:57 -- accel/accel.sh@20 -- # IFS=: 00:06:08.575 10:43:57 -- accel/accel.sh@20 -- # read -r var val 00:06:08.575 10:43:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:08.575 10:43:57 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:08.575 10:43:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.575 00:06:08.575 real 0m2.657s 00:06:08.575 user 0m2.407s 00:06:08.575 sys 0m0.248s 00:06:08.575 10:43:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.575 10:43:57 -- common/autotest_common.sh@10 -- # set +x 00:06:08.575 ************************************ 00:06:08.575 END TEST accel_fill 00:06:08.576 ************************************ 00:06:08.576 10:43:57 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:08.576 10:43:57 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:08.576 10:43:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.576 10:43:57 -- common/autotest_common.sh@10 -- # set +x 00:06:08.576 ************************************ 00:06:08.576 START TEST 
accel_copy_crc32c 00:06:08.576 ************************************ 00:06:08.576 10:43:57 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:08.576 10:43:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:08.576 10:43:57 -- accel/accel.sh@17 -- # local accel_module 00:06:08.576 10:43:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:08.576 10:43:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:08.576 10:43:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.576 10:43:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.576 10:43:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.576 10:43:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.576 10:43:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.576 10:43:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.576 10:43:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.576 10:43:57 -- accel/accel.sh@42 -- # jq -r . 00:06:08.576 [2024-12-15 10:43:57.204020] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:08.576 [2024-12-15 10:43:57.204108] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1296835 ] 00:06:08.576 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.576 [2024-12-15 10:43:57.274269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.576 [2024-12-15 10:43:57.341088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.512 10:43:58 -- accel/accel.sh@18 -- # out=' 00:06:09.512 SPDK Configuration: 00:06:09.512 Core mask: 0x1 00:06:09.512 00:06:09.512 Accel Perf Configuration: 00:06:09.512 Workload Type: copy_crc32c 00:06:09.512 CRC-32C seed: 0 00:06:09.512 Vector size: 4096 bytes 00:06:09.512 Transfer size: 4096 bytes 00:06:09.512 Vector count 1 00:06:09.512 Module: software 00:06:09.512 Queue depth: 32 00:06:09.512 Allocate depth: 32 00:06:09.512 # threads/core: 1 00:06:09.512 Run time: 1 seconds 00:06:09.512 Verify: Yes 00:06:09.512 00:06:09.512 Running for 1 seconds... 00:06:09.512 00:06:09.512 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:09.512 ------------------------------------------------------------------------------------ 00:06:09.512 0,0 435104/s 1699 MiB/s 0 0 00:06:09.512 ==================================================================================== 00:06:09.512 Total 435104/s 1699 MiB/s 0 0' 00:06:09.512 10:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:09.512 10:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:09.512 10:43:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:09.512 10:43:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:09.512 10:43:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.512 10:43:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.512 10:43:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.512 10:43:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.512 10:43:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.512 10:43:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.512 10:43:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.512 10:43:58 -- accel/accel.sh@42 -- # jq -r . 
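The bandwidth columns in these tables follow one rule: transfers per second times the 4096-byte transfer size. A quick cross-check for the copy_crc32c row above, and for the earlier crc32c -y -C 2 run (whose config dump still reports Transfer size: 4096 bytes despite Vector count 2):

    echo "$((435104 * 4096 / 1024 / 1024)) MiB/s"   # copy_crc32c -> 1699 MiB/s, as in the table
    echo "$((611232 * 4096 / 1024 / 1024)) MiB/s"   # crc32c -C 2 -> 2387 MiB/s, as in that table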
00:06:09.772 [2024-12-15 10:43:58.529740] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:09.772 [2024-12-15 10:43:58.529826] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1297102 ] 00:06:09.772 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.772 [2024-12-15 10:43:58.598986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.772 [2024-12-15 10:43:58.664971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.772 10:43:58 -- accel/accel.sh@21 -- # val= 00:06:09.772 10:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:09.772 10:43:58 -- accel/accel.sh@21 -- # val= 00:06:09.772 10:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:09.772 10:43:58 -- accel/accel.sh@21 -- # val=0x1 00:06:09.772 10:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:09.772 10:43:58 -- accel/accel.sh@21 -- # val= 00:06:09.772 10:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:09.772 10:43:58 -- accel/accel.sh@21 -- # val= 00:06:09.772 10:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:09.772 10:43:58 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:09.772 10:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.772 10:43:58 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:09.772 10:43:58 -- accel/accel.sh@21 -- # val=0 00:06:09.772 10:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:09.772 10:43:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:09.772 10:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:09.772 10:43:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:09.772 10:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:09.772 10:43:58 -- accel/accel.sh@21 -- # val= 00:06:09.772 10:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:09.772 10:43:58 -- accel/accel.sh@21 -- # val=software 00:06:09.772 10:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.772 10:43:58 -- accel/accel.sh@23 -- # accel_module=software 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:09.772 10:43:58 -- accel/accel.sh@21 -- # val=32 00:06:09.772 10:43:58 -- accel/accel.sh@22 -- # case "$var" in 
00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:09.772 10:43:58 -- accel/accel.sh@21 -- # val=32 00:06:09.772 10:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:09.772 10:43:58 -- accel/accel.sh@21 -- # val=1 00:06:09.772 10:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:09.772 10:43:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:09.772 10:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:09.772 10:43:58 -- accel/accel.sh@21 -- # val=Yes 00:06:09.772 10:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:09.772 10:43:58 -- accel/accel.sh@21 -- # val= 00:06:09.772 10:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:09.772 10:43:58 -- accel/accel.sh@21 -- # val= 00:06:09.772 10:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:09.772 10:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:11.151 10:43:59 -- accel/accel.sh@21 -- # val= 00:06:11.151 10:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.151 10:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:11.151 10:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:11.151 10:43:59 -- accel/accel.sh@21 -- # val= 00:06:11.151 10:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.151 10:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:11.151 10:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:11.151 10:43:59 -- accel/accel.sh@21 -- # val= 00:06:11.151 10:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.151 10:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:11.151 10:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:11.151 10:43:59 -- accel/accel.sh@21 -- # val= 00:06:11.151 10:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.151 10:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:11.151 10:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:11.151 10:43:59 -- accel/accel.sh@21 -- # val= 00:06:11.151 10:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.151 10:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:11.151 10:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:11.151 10:43:59 -- accel/accel.sh@21 -- # val= 00:06:11.151 10:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.151 10:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:11.151 10:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:11.151 10:43:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:11.151 10:43:59 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:11.151 10:43:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.151 00:06:11.151 real 0m2.652s 00:06:11.151 user 0m2.387s 00:06:11.151 sys 0m0.265s 00:06:11.151 10:43:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.151 10:43:59 -- common/autotest_common.sh@10 -- # set +x 00:06:11.152 ************************************ 00:06:11.152 END TEST accel_copy_crc32c 00:06:11.152 ************************************ 00:06:11.152 
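Note on the accel_copy_crc32c run above: the workload copies each 4096-byte source buffer into a destination while computing a CRC-32C over the data, and the bandwidth column is just transfers x transfer size (435104/s x 4096 B ~= 1699 MiB/s, matching the report). A minimal standalone sketch of the operation, using a bit-at-a-time CRC for brevity — the software accel module itself uses optimized table/SSE4.2 code, so this helper is illustrative only:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Reflected CRC-32C (Castagnoli) polynomial. */
    #define CRC32C_POLY 0x82F63B78u

    /* Copy src into dst and return the CRC-32C of the copied data.
     * 'seed' follows the report's "CRC-32C seed: 0" convention. */
    static uint32_t
    copy_crc32c(void *dst, const void *src, size_t len, uint32_t seed)
    {
        const uint8_t *in = src;
        uint32_t crc = ~seed;

        memcpy(dst, src, len);
        for (size_t i = 0; i < len; i++) {
            crc ^= in[i];
            for (int bit = 0; bit < 8; bit++) {
                crc = (crc >> 1) ^ (CRC32C_POLY & -(crc & 1u));
            }
        }
        return ~crc;
    }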
10:43:59 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:11.152 10:43:59 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:11.152 10:43:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.152 10:43:59 -- common/autotest_common.sh@10 -- # set +x 00:06:11.152 ************************************ 00:06:11.152 START TEST accel_copy_crc32c_C2 00:06:11.152 ************************************ 00:06:11.152 10:43:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:11.152 10:43:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:11.152 10:43:59 -- accel/accel.sh@17 -- # local accel_module 00:06:11.152 10:43:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:11.152 10:43:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:11.152 10:43:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.152 10:43:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:11.152 10:43:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.152 10:43:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.152 10:43:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:11.152 10:43:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:11.152 10:43:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:11.152 10:43:59 -- accel/accel.sh@42 -- # jq -r . 00:06:11.152 [2024-12-15 10:43:59.900649] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:11.152 [2024-12-15 10:43:59.900728] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1297385 ] 00:06:11.152 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.152 [2024-12-15 10:43:59.969074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.152 [2024-12-15 10:44:00.044465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.530 10:44:01 -- accel/accel.sh@18 -- # out=' 00:06:12.530 SPDK Configuration: 00:06:12.531 Core mask: 0x1 00:06:12.531 00:06:12.531 Accel Perf Configuration: 00:06:12.531 Workload Type: copy_crc32c 00:06:12.531 CRC-32C seed: 0 00:06:12.531 Vector size: 4096 bytes 00:06:12.531 Transfer size: 8192 bytes 00:06:12.531 Vector count 2 00:06:12.531 Module: software 00:06:12.531 Queue depth: 32 00:06:12.531 Allocate depth: 32 00:06:12.531 # threads/core: 1 00:06:12.531 Run time: 1 seconds 00:06:12.531 Verify: Yes 00:06:12.531 00:06:12.531 Running for 1 seconds... 
00:06:12.531 00:06:12.531 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:12.531 ------------------------------------------------------------------------------------ 00:06:12.531 0,0 292768/s 2287 MiB/s 0 0 00:06:12.531 ==================================================================================== 00:06:12.531 Total 292768/s 1143 MiB/s 0 0' 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:12.531 10:44:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:12.531 10:44:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:12.531 10:44:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.531 10:44:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.531 10:44:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.531 10:44:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.531 10:44:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.531 10:44:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.531 10:44:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.531 10:44:01 -- accel/accel.sh@42 -- # jq -r . 00:06:12.531 [2024-12-15 10:44:01.233646] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:12.531 [2024-12-15 10:44:01.233738] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1297559 ] 00:06:12.531 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.531 [2024-12-15 10:44:01.301086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.531 [2024-12-15 10:44:01.366371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.531 10:44:01 -- accel/accel.sh@21 -- # val= 00:06:12.531 10:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:12.531 10:44:01 -- accel/accel.sh@21 -- # val= 00:06:12.531 10:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:12.531 10:44:01 -- accel/accel.sh@21 -- # val=0x1 00:06:12.531 10:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:12.531 10:44:01 -- accel/accel.sh@21 -- # val= 00:06:12.531 10:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:12.531 10:44:01 -- accel/accel.sh@21 -- # val= 00:06:12.531 10:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:12.531 10:44:01 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:12.531 10:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.531 10:44:01 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:12.531 10:44:01 -- accel/accel.sh@21 -- # val=0 00:06:12.531 10:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # 
IFS=: 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:12.531 10:44:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:12.531 10:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:12.531 10:44:01 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:12.531 10:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:12.531 10:44:01 -- accel/accel.sh@21 -- # val= 00:06:12.531 10:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:12.531 10:44:01 -- accel/accel.sh@21 -- # val=software 00:06:12.531 10:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.531 10:44:01 -- accel/accel.sh@23 -- # accel_module=software 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:12.531 10:44:01 -- accel/accel.sh@21 -- # val=32 00:06:12.531 10:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:12.531 10:44:01 -- accel/accel.sh@21 -- # val=32 00:06:12.531 10:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:12.531 10:44:01 -- accel/accel.sh@21 -- # val=1 00:06:12.531 10:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:12.531 10:44:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:12.531 10:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:12.531 10:44:01 -- accel/accel.sh@21 -- # val=Yes 00:06:12.531 10:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:12.531 10:44:01 -- accel/accel.sh@21 -- # val= 00:06:12.531 10:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:12.531 10:44:01 -- accel/accel.sh@21 -- # val= 00:06:12.531 10:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:12.531 10:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:13.910 10:44:02 -- accel/accel.sh@21 -- # val= 00:06:13.910 10:44:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.910 10:44:02 -- accel/accel.sh@20 -- # IFS=: 00:06:13.910 10:44:02 -- accel/accel.sh@20 -- # read -r var val 00:06:13.910 10:44:02 -- accel/accel.sh@21 -- # val= 00:06:13.910 10:44:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.910 10:44:02 -- accel/accel.sh@20 -- # IFS=: 00:06:13.910 10:44:02 -- accel/accel.sh@20 -- # read -r var val 00:06:13.910 10:44:02 -- accel/accel.sh@21 -- # val= 00:06:13.910 10:44:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.910 10:44:02 -- accel/accel.sh@20 -- # IFS=: 00:06:13.910 10:44:02 -- accel/accel.sh@20 -- # read -r var val 00:06:13.910 10:44:02 -- accel/accel.sh@21 -- # val= 00:06:13.910 10:44:02 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:13.910 10:44:02 -- accel/accel.sh@20 -- # IFS=: 00:06:13.910 10:44:02 -- accel/accel.sh@20 -- # read -r var val 00:06:13.910 10:44:02 -- accel/accel.sh@21 -- # val= 00:06:13.910 10:44:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.910 10:44:02 -- accel/accel.sh@20 -- # IFS=: 00:06:13.910 10:44:02 -- accel/accel.sh@20 -- # read -r var val 00:06:13.910 10:44:02 -- accel/accel.sh@21 -- # val= 00:06:13.910 10:44:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.910 10:44:02 -- accel/accel.sh@20 -- # IFS=: 00:06:13.910 10:44:02 -- accel/accel.sh@20 -- # read -r var val 00:06:13.910 10:44:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:13.910 10:44:02 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:13.910 10:44:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.910 00:06:13.910 real 0m2.657s 00:06:13.910 user 0m2.408s 00:06:13.911 sys 0m0.248s 00:06:13.911 10:44:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.911 10:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:13.911 ************************************ 00:06:13.911 END TEST accel_copy_crc32c_C2 00:06:13.911 ************************************ 00:06:13.911 10:44:02 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:13.911 10:44:02 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:13.911 10:44:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.911 10:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:13.911 ************************************ 00:06:13.911 START TEST accel_dualcast 00:06:13.911 ************************************ 00:06:13.911 10:44:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:13.911 10:44:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:13.911 10:44:02 -- accel/accel.sh@17 -- # local accel_module 00:06:13.911 10:44:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:13.911 10:44:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:13.911 10:44:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.911 10:44:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.911 10:44:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.911 10:44:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.911 10:44:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.911 10:44:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.911 10:44:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.911 10:44:02 -- accel/accel.sh@42 -- # jq -r . 00:06:13.911 [2024-12-15 10:44:02.602781] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
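Note on the accel_copy_crc32c_C2 run that ends above: with -C 2 each operation covers a vector of two 4096-byte buffers (the 8192-byte transfer size in the report), with one CRC-32C chained across the elements. A sketch of that chaining, reusing the illustrative copy_crc32c() helper from the earlier note (the buffer-pair layout here is assumed, not SPDK's):

    /* One CRC-32C chained across an N-element vector of copies. */
    struct buf_pair { void *dst; const void *src; size_t len; };

    static uint32_t
    copy_crc32c_vector(const struct buf_pair *v, int cnt, uint32_t seed)
    {
        uint32_t crc = seed;

        for (int i = 0; i < cnt; i++) {
            crc = copy_crc32c(v[i].dst, v[i].src, v[i].len, crc);
        }
        return crc;
    }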
00:06:13.911 [2024-12-15 10:44:02.602869] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1297761 ] 00:06:13.911 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.911 [2024-12-15 10:44:02.674708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.911 [2024-12-15 10:44:02.741965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.291 10:44:03 -- accel/accel.sh@18 -- # out=' 00:06:15.291 SPDK Configuration: 00:06:15.291 Core mask: 0x1 00:06:15.291 00:06:15.291 Accel Perf Configuration: 00:06:15.291 Workload Type: dualcast 00:06:15.291 Transfer size: 4096 bytes 00:06:15.291 Vector count 1 00:06:15.291 Module: software 00:06:15.291 Queue depth: 32 00:06:15.291 Allocate depth: 32 00:06:15.291 # threads/core: 1 00:06:15.291 Run time: 1 seconds 00:06:15.291 Verify: Yes 00:06:15.291 00:06:15.291 Running for 1 seconds... 00:06:15.291 00:06:15.291 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:15.291 ------------------------------------------------------------------------------------ 00:06:15.291 0,0 635840/s 2483 MiB/s 0 0 00:06:15.291 ==================================================================================== 00:06:15.291 Total 635840/s 2483 MiB/s 0 0' 00:06:15.291 10:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:15.291 10:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:15.291 10:44:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:15.291 10:44:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:15.291 10:44:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.291 10:44:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.291 10:44:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.291 10:44:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.291 10:44:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.291 10:44:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.291 10:44:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.291 10:44:03 -- accel/accel.sh@42 -- # jq -r . 00:06:15.291 [2024-12-15 10:44:03.929004] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:15.291 [2024-12-15 10:44:03.929092] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1297967 ] 00:06:15.291 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.291 [2024-12-15 10:44:03.997861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.291 [2024-12-15 10:44:04.063999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.291 10:44:04 -- accel/accel.sh@21 -- # val= 00:06:15.291 10:44:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # IFS=: 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # read -r var val 00:06:15.291 10:44:04 -- accel/accel.sh@21 -- # val= 00:06:15.291 10:44:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # IFS=: 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # read -r var val 00:06:15.291 10:44:04 -- accel/accel.sh@21 -- # val=0x1 00:06:15.291 10:44:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # IFS=: 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # read -r var val 00:06:15.291 10:44:04 -- accel/accel.sh@21 -- # val= 00:06:15.291 10:44:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # IFS=: 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # read -r var val 00:06:15.291 10:44:04 -- accel/accel.sh@21 -- # val= 00:06:15.291 10:44:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # IFS=: 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # read -r var val 00:06:15.291 10:44:04 -- accel/accel.sh@21 -- # val=dualcast 00:06:15.291 10:44:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.291 10:44:04 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # IFS=: 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # read -r var val 00:06:15.291 10:44:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:15.291 10:44:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # IFS=: 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # read -r var val 00:06:15.291 10:44:04 -- accel/accel.sh@21 -- # val= 00:06:15.291 10:44:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # IFS=: 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # read -r var val 00:06:15.291 10:44:04 -- accel/accel.sh@21 -- # val=software 00:06:15.291 10:44:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.291 10:44:04 -- accel/accel.sh@23 -- # accel_module=software 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # IFS=: 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # read -r var val 00:06:15.291 10:44:04 -- accel/accel.sh@21 -- # val=32 00:06:15.291 10:44:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # IFS=: 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # read -r var val 00:06:15.291 10:44:04 -- accel/accel.sh@21 -- # val=32 00:06:15.291 10:44:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # IFS=: 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # read -r var val 00:06:15.291 10:44:04 -- accel/accel.sh@21 -- # val=1 00:06:15.291 10:44:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # IFS=: 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # read -r var val 00:06:15.291 10:44:04 
-- accel/accel.sh@21 -- # val='1 seconds' 00:06:15.291 10:44:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # IFS=: 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # read -r var val 00:06:15.291 10:44:04 -- accel/accel.sh@21 -- # val=Yes 00:06:15.291 10:44:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # IFS=: 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # read -r var val 00:06:15.291 10:44:04 -- accel/accel.sh@21 -- # val= 00:06:15.291 10:44:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # IFS=: 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # read -r var val 00:06:15.291 10:44:04 -- accel/accel.sh@21 -- # val= 00:06:15.291 10:44:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # IFS=: 00:06:15.291 10:44:04 -- accel/accel.sh@20 -- # read -r var val 00:06:16.236 10:44:05 -- accel/accel.sh@21 -- # val= 00:06:16.236 10:44:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.236 10:44:05 -- accel/accel.sh@20 -- # IFS=: 00:06:16.236 10:44:05 -- accel/accel.sh@20 -- # read -r var val 00:06:16.237 10:44:05 -- accel/accel.sh@21 -- # val= 00:06:16.237 10:44:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.237 10:44:05 -- accel/accel.sh@20 -- # IFS=: 00:06:16.237 10:44:05 -- accel/accel.sh@20 -- # read -r var val 00:06:16.237 10:44:05 -- accel/accel.sh@21 -- # val= 00:06:16.237 10:44:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.237 10:44:05 -- accel/accel.sh@20 -- # IFS=: 00:06:16.237 10:44:05 -- accel/accel.sh@20 -- # read -r var val 00:06:16.237 10:44:05 -- accel/accel.sh@21 -- # val= 00:06:16.237 10:44:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.237 10:44:05 -- accel/accel.sh@20 -- # IFS=: 00:06:16.237 10:44:05 -- accel/accel.sh@20 -- # read -r var val 00:06:16.237 10:44:05 -- accel/accel.sh@21 -- # val= 00:06:16.237 10:44:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.237 10:44:05 -- accel/accel.sh@20 -- # IFS=: 00:06:16.237 10:44:05 -- accel/accel.sh@20 -- # read -r var val 00:06:16.237 10:44:05 -- accel/accel.sh@21 -- # val= 00:06:16.237 10:44:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.237 10:44:05 -- accel/accel.sh@20 -- # IFS=: 00:06:16.237 10:44:05 -- accel/accel.sh@20 -- # read -r var val 00:06:16.237 10:44:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:16.237 10:44:05 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:16.237 10:44:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.237 00:06:16.237 real 0m2.652s 00:06:16.237 user 0m2.408s 00:06:16.237 sys 0m0.242s 00:06:16.237 10:44:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:16.237 10:44:05 -- common/autotest_common.sh@10 -- # set +x 00:06:16.237 ************************************ 00:06:16.237 END TEST accel_dualcast 00:06:16.237 ************************************ 00:06:16.497 10:44:05 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:16.497 10:44:05 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:16.497 10:44:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.497 10:44:05 -- common/autotest_common.sh@10 -- # set +x 00:06:16.497 ************************************ 00:06:16.497 START TEST accel_compare 00:06:16.497 ************************************ 00:06:16.497 10:44:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:16.497 10:44:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:16.497 10:44:05 
-- accel/accel.sh@17 -- # local accel_module 00:06:16.497 10:44:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:16.497 10:44:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:16.497 10:44:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.497 10:44:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.497 10:44:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.497 10:44:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.497 10:44:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.497 10:44:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.497 10:44:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.497 10:44:05 -- accel/accel.sh@42 -- # jq -r . 00:06:16.497 [2024-12-15 10:44:05.298265] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:16.497 [2024-12-15 10:44:05.298349] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1298256 ] 00:06:16.497 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.497 [2024-12-15 10:44:05.368485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.497 [2024-12-15 10:44:05.436820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.876 10:44:06 -- accel/accel.sh@18 -- # out=' 00:06:17.876 SPDK Configuration: 00:06:17.876 Core mask: 0x1 00:06:17.876 00:06:17.876 Accel Perf Configuration: 00:06:17.876 Workload Type: compare 00:06:17.876 Transfer size: 4096 bytes 00:06:17.876 Vector count 1 00:06:17.876 Module: software 00:06:17.876 Queue depth: 32 00:06:17.876 Allocate depth: 32 00:06:17.876 # threads/core: 1 00:06:17.876 Run time: 1 seconds 00:06:17.876 Verify: Yes 00:06:17.876 00:06:17.876 Running for 1 seconds... 00:06:17.876 00:06:17.876 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:17.876 ------------------------------------------------------------------------------------ 00:06:17.876 0,0 814368/s 3181 MiB/s 0 0 00:06:17.876 ==================================================================================== 00:06:17.876 Total 814368/s 3181 MiB/s 0 0' 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # IFS=: 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # read -r var val 00:06:17.876 10:44:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:17.876 10:44:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:17.876 10:44:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.876 10:44:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.876 10:44:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.876 10:44:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.876 10:44:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.876 10:44:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.876 10:44:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.876 10:44:06 -- accel/accel.sh@42 -- # jq -r . 00:06:17.876 [2024-12-15 10:44:06.625629] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
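Note on the accel_dualcast run above: dualcast writes one 4096-byte source to two destination buffers in a single operation, so a software path is essentially two memcpy() calls. A minimal sketch (illustrative, not the SPDK module):

    #include <string.h>

    /* Replicate one source buffer into two destinations. */
    static void
    dualcast(void *dst1, void *dst2, const void *src, size_t len)
    {
        memcpy(dst1, src, len);
        memcpy(dst2, src, len);
    }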
00:06:17.876 [2024-12-15 10:44:06.625716] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1298524 ] 00:06:17.876 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.876 [2024-12-15 10:44:06.694353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.876 [2024-12-15 10:44:06.759686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.876 10:44:06 -- accel/accel.sh@21 -- # val= 00:06:17.876 10:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # IFS=: 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # read -r var val 00:06:17.876 10:44:06 -- accel/accel.sh@21 -- # val= 00:06:17.876 10:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # IFS=: 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # read -r var val 00:06:17.876 10:44:06 -- accel/accel.sh@21 -- # val=0x1 00:06:17.876 10:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # IFS=: 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # read -r var val 00:06:17.876 10:44:06 -- accel/accel.sh@21 -- # val= 00:06:17.876 10:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # IFS=: 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # read -r var val 00:06:17.876 10:44:06 -- accel/accel.sh@21 -- # val= 00:06:17.876 10:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # IFS=: 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # read -r var val 00:06:17.876 10:44:06 -- accel/accel.sh@21 -- # val=compare 00:06:17.876 10:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.876 10:44:06 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # IFS=: 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # read -r var val 00:06:17.876 10:44:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:17.876 10:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # IFS=: 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # read -r var val 00:06:17.876 10:44:06 -- accel/accel.sh@21 -- # val= 00:06:17.876 10:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # IFS=: 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # read -r var val 00:06:17.876 10:44:06 -- accel/accel.sh@21 -- # val=software 00:06:17.876 10:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.876 10:44:06 -- accel/accel.sh@23 -- # accel_module=software 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # IFS=: 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # read -r var val 00:06:17.876 10:44:06 -- accel/accel.sh@21 -- # val=32 00:06:17.876 10:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # IFS=: 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # read -r var val 00:06:17.876 10:44:06 -- accel/accel.sh@21 -- # val=32 00:06:17.876 10:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # IFS=: 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # read -r var val 00:06:17.876 10:44:06 -- accel/accel.sh@21 -- # val=1 00:06:17.876 10:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # IFS=: 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # read -r var val 00:06:17.876 10:44:06 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:17.876 10:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # IFS=: 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # read -r var val 00:06:17.876 10:44:06 -- accel/accel.sh@21 -- # val=Yes 00:06:17.876 10:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # IFS=: 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # read -r var val 00:06:17.876 10:44:06 -- accel/accel.sh@21 -- # val= 00:06:17.876 10:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # IFS=: 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # read -r var val 00:06:17.876 10:44:06 -- accel/accel.sh@21 -- # val= 00:06:17.876 10:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # IFS=: 00:06:17.876 10:44:06 -- accel/accel.sh@20 -- # read -r var val 00:06:19.255 10:44:07 -- accel/accel.sh@21 -- # val= 00:06:19.255 10:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.255 10:44:07 -- accel/accel.sh@20 -- # IFS=: 00:06:19.255 10:44:07 -- accel/accel.sh@20 -- # read -r var val 00:06:19.255 10:44:07 -- accel/accel.sh@21 -- # val= 00:06:19.255 10:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.255 10:44:07 -- accel/accel.sh@20 -- # IFS=: 00:06:19.255 10:44:07 -- accel/accel.sh@20 -- # read -r var val 00:06:19.255 10:44:07 -- accel/accel.sh@21 -- # val= 00:06:19.255 10:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.255 10:44:07 -- accel/accel.sh@20 -- # IFS=: 00:06:19.255 10:44:07 -- accel/accel.sh@20 -- # read -r var val 00:06:19.255 10:44:07 -- accel/accel.sh@21 -- # val= 00:06:19.255 10:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.255 10:44:07 -- accel/accel.sh@20 -- # IFS=: 00:06:19.255 10:44:07 -- accel/accel.sh@20 -- # read -r var val 00:06:19.255 10:44:07 -- accel/accel.sh@21 -- # val= 00:06:19.255 10:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.255 10:44:07 -- accel/accel.sh@20 -- # IFS=: 00:06:19.255 10:44:07 -- accel/accel.sh@20 -- # read -r var val 00:06:19.255 10:44:07 -- accel/accel.sh@21 -- # val= 00:06:19.255 10:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.255 10:44:07 -- accel/accel.sh@20 -- # IFS=: 00:06:19.255 10:44:07 -- accel/accel.sh@20 -- # read -r var val 00:06:19.255 10:44:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:19.255 10:44:07 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:19.255 10:44:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.255 00:06:19.255 real 0m2.653s 00:06:19.255 user 0m2.402s 00:06:19.255 sys 0m0.249s 00:06:19.255 10:44:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:19.255 10:44:07 -- common/autotest_common.sh@10 -- # set +x 00:06:19.255 ************************************ 00:06:19.255 END TEST accel_compare 00:06:19.255 ************************************ 00:06:19.255 10:44:07 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:19.255 10:44:07 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:19.255 10:44:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.255 10:44:07 -- common/autotest_common.sh@10 -- # set +x 00:06:19.255 ************************************ 00:06:19.255 START TEST accel_xor 00:06:19.255 ************************************ 00:06:19.255 10:44:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:19.255 10:44:07 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.255 10:44:07 -- accel/accel.sh@17 
-- # local accel_module 00:06:19.255 10:44:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:19.255 10:44:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:19.255 10:44:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.255 10:44:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.255 10:44:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.255 10:44:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.255 10:44:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.255 10:44:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.255 10:44:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.255 10:44:07 -- accel/accel.sh@42 -- # jq -r . 00:06:19.255 [2024-12-15 10:44:07.994525] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:19.255 [2024-12-15 10:44:07.994600] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1298805 ] 00:06:19.255 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.255 [2024-12-15 10:44:08.062964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.255 [2024-12-15 10:44:08.129945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.634 10:44:09 -- accel/accel.sh@18 -- # out=' 00:06:20.634 SPDK Configuration: 00:06:20.634 Core mask: 0x1 00:06:20.634 00:06:20.634 Accel Perf Configuration: 00:06:20.634 Workload Type: xor 00:06:20.634 Source buffers: 2 00:06:20.634 Transfer size: 4096 bytes 00:06:20.634 Vector count 1 00:06:20.634 Module: software 00:06:20.634 Queue depth: 32 00:06:20.634 Allocate depth: 32 00:06:20.634 # threads/core: 1 00:06:20.634 Run time: 1 seconds 00:06:20.634 Verify: Yes 00:06:20.634 00:06:20.634 Running for 1 seconds... 00:06:20.634 00:06:20.634 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:20.634 ------------------------------------------------------------------------------------ 00:06:20.634 0,0 691712/s 2702 MiB/s 0 0 00:06:20.634 ==================================================================================== 00:06:20.635 Total 691712/s 2702 MiB/s 0 0' 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # IFS=: 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # read -r var val 00:06:20.635 10:44:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:20.635 10:44:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:20.635 10:44:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.635 10:44:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.635 10:44:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.635 10:44:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.635 10:44:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.635 10:44:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.635 10:44:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.635 10:44:09 -- accel/accel.sh@42 -- # jq -r . 00:06:20.635 [2024-12-15 10:44:09.321017] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
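Note on the accel_compare run above: the workload checks two 4096-byte buffers for equality, and any mismatch would land in the Failed/Miscompares columns of the report. In a software path this reduces to memcmp(); the counter handling below is an assumption for illustration:

    #include <string.h>

    /* Compare two buffers, counting miscompares as the report does. */
    static int
    accel_compare(const void *a, const void *b, size_t len, long *miscompares)
    {
        int rc = memcmp(a, b, len);

        if (rc != 0) {
            (*miscompares)++;
        }
        return rc;
    }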
00:06:20.635 [2024-12-15 10:44:09.321104] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1299074 ] 00:06:20.635 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.635 [2024-12-15 10:44:09.391100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.635 [2024-12-15 10:44:09.456443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.635 10:44:09 -- accel/accel.sh@21 -- # val= 00:06:20.635 10:44:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # IFS=: 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # read -r var val 00:06:20.635 10:44:09 -- accel/accel.sh@21 -- # val= 00:06:20.635 10:44:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # IFS=: 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # read -r var val 00:06:20.635 10:44:09 -- accel/accel.sh@21 -- # val=0x1 00:06:20.635 10:44:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # IFS=: 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # read -r var val 00:06:20.635 10:44:09 -- accel/accel.sh@21 -- # val= 00:06:20.635 10:44:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # IFS=: 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # read -r var val 00:06:20.635 10:44:09 -- accel/accel.sh@21 -- # val= 00:06:20.635 10:44:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # IFS=: 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # read -r var val 00:06:20.635 10:44:09 -- accel/accel.sh@21 -- # val=xor 00:06:20.635 10:44:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.635 10:44:09 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # IFS=: 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # read -r var val 00:06:20.635 10:44:09 -- accel/accel.sh@21 -- # val=2 00:06:20.635 10:44:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # IFS=: 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # read -r var val 00:06:20.635 10:44:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:20.635 10:44:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # IFS=: 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # read -r var val 00:06:20.635 10:44:09 -- accel/accel.sh@21 -- # val= 00:06:20.635 10:44:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # IFS=: 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # read -r var val 00:06:20.635 10:44:09 -- accel/accel.sh@21 -- # val=software 00:06:20.635 10:44:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.635 10:44:09 -- accel/accel.sh@23 -- # accel_module=software 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # IFS=: 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # read -r var val 00:06:20.635 10:44:09 -- accel/accel.sh@21 -- # val=32 00:06:20.635 10:44:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # IFS=: 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # read -r var val 00:06:20.635 10:44:09 -- accel/accel.sh@21 -- # val=32 00:06:20.635 10:44:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # IFS=: 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # read -r var val 00:06:20.635 10:44:09 -- 
accel/accel.sh@21 -- # val=1 00:06:20.635 10:44:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # IFS=: 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # read -r var val 00:06:20.635 10:44:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:20.635 10:44:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # IFS=: 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # read -r var val 00:06:20.635 10:44:09 -- accel/accel.sh@21 -- # val=Yes 00:06:20.635 10:44:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # IFS=: 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # read -r var val 00:06:20.635 10:44:09 -- accel/accel.sh@21 -- # val= 00:06:20.635 10:44:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # IFS=: 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # read -r var val 00:06:20.635 10:44:09 -- accel/accel.sh@21 -- # val= 00:06:20.635 10:44:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # IFS=: 00:06:20.635 10:44:09 -- accel/accel.sh@20 -- # read -r var val 00:06:22.012 10:44:10 -- accel/accel.sh@21 -- # val= 00:06:22.012 10:44:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.012 10:44:10 -- accel/accel.sh@20 -- # IFS=: 00:06:22.012 10:44:10 -- accel/accel.sh@20 -- # read -r var val 00:06:22.012 10:44:10 -- accel/accel.sh@21 -- # val= 00:06:22.012 10:44:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.012 10:44:10 -- accel/accel.sh@20 -- # IFS=: 00:06:22.012 10:44:10 -- accel/accel.sh@20 -- # read -r var val 00:06:22.012 10:44:10 -- accel/accel.sh@21 -- # val= 00:06:22.012 10:44:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.012 10:44:10 -- accel/accel.sh@20 -- # IFS=: 00:06:22.012 10:44:10 -- accel/accel.sh@20 -- # read -r var val 00:06:22.012 10:44:10 -- accel/accel.sh@21 -- # val= 00:06:22.012 10:44:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.012 10:44:10 -- accel/accel.sh@20 -- # IFS=: 00:06:22.012 10:44:10 -- accel/accel.sh@20 -- # read -r var val 00:06:22.012 10:44:10 -- accel/accel.sh@21 -- # val= 00:06:22.012 10:44:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.012 10:44:10 -- accel/accel.sh@20 -- # IFS=: 00:06:22.013 10:44:10 -- accel/accel.sh@20 -- # read -r var val 00:06:22.013 10:44:10 -- accel/accel.sh@21 -- # val= 00:06:22.013 10:44:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.013 10:44:10 -- accel/accel.sh@20 -- # IFS=: 00:06:22.013 10:44:10 -- accel/accel.sh@20 -- # read -r var val 00:06:22.013 10:44:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:22.013 10:44:10 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:22.013 10:44:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.013 00:06:22.013 real 0m2.655s 00:06:22.013 user 0m2.413s 00:06:22.013 sys 0m0.240s 00:06:22.013 10:44:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.013 10:44:10 -- common/autotest_common.sh@10 -- # set +x 00:06:22.013 ************************************ 00:06:22.013 END TEST accel_xor 00:06:22.013 ************************************ 00:06:22.013 10:44:10 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:22.013 10:44:10 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:22.013 10:44:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.013 10:44:10 -- common/autotest_common.sh@10 -- # set +x 00:06:22.013 ************************************ 00:06:22.013 START TEST accel_xor 
00:06:22.013 ************************************ 00:06:22.013 10:44:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:22.013 10:44:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:22.013 10:44:10 -- accel/accel.sh@17 -- # local accel_module 00:06:22.013 10:44:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:22.013 10:44:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:22.013 10:44:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.013 10:44:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.013 10:44:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.013 10:44:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.013 10:44:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.013 10:44:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.013 10:44:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.013 10:44:10 -- accel/accel.sh@42 -- # jq -r . 00:06:22.013 [2024-12-15 10:44:10.693828] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:22.013 [2024-12-15 10:44:10.693909] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1299362 ] 00:06:22.013 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.013 [2024-12-15 10:44:10.764670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.013 [2024-12-15 10:44:10.831887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.392 10:44:12 -- accel/accel.sh@18 -- # out=' 00:06:23.392 SPDK Configuration: 00:06:23.392 Core mask: 0x1 00:06:23.392 00:06:23.392 Accel Perf Configuration: 00:06:23.392 Workload Type: xor 00:06:23.392 Source buffers: 3 00:06:23.392 Transfer size: 4096 bytes 00:06:23.392 Vector count 1 00:06:23.392 Module: software 00:06:23.392 Queue depth: 32 00:06:23.392 Allocate depth: 32 00:06:23.392 # threads/core: 1 00:06:23.392 Run time: 1 seconds 00:06:23.392 Verify: Yes 00:06:23.392 00:06:23.392 Running for 1 seconds... 00:06:23.392 00:06:23.392 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:23.392 ------------------------------------------------------------------------------------ 00:06:23.392 0,0 673216/s 2629 MiB/s 0 0 00:06:23.392 ==================================================================================== 00:06:23.392 Total 673216/s 2629 MiB/s 0 0' 00:06:23.392 10:44:12 -- accel/accel.sh@20 -- # IFS=: 00:06:23.392 10:44:12 -- accel/accel.sh@20 -- # read -r var val 00:06:23.392 10:44:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:23.392 10:44:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:23.392 10:44:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.392 10:44:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.392 10:44:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.392 10:44:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.392 10:44:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.392 10:44:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.392 10:44:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.392 10:44:12 -- accel/accel.sh@42 -- # jq -r . 00:06:23.392 [2024-12-15 10:44:12.021185] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
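Note on the two accel_xor runs (the 2-source run that finished above and the 3-source -x 3 run in progress here): both XOR N source buffers of 4096 bytes into one destination, differing only in the source count. A byte-wise sketch that handles either case — real code would work a word at a time or use SIMD:

    #include <stddef.h>
    #include <stdint.h>

    /* XOR 'nsrcs' source buffers of 'len' bytes into dst. */
    static void
    xor_buffers(uint8_t *dst, const uint8_t *const *srcs, int nsrcs, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            uint8_t v = 0;

            for (int s = 0; s < nsrcs; s++) {
                v ^= srcs[s][i];
            }
            dst[i] = v;
        }
    }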
00:06:23.393 [2024-12-15 10:44:12.021273] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1299586 ] 00:06:23.393 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.393 [2024-12-15 10:44:12.090361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.393 [2024-12-15 10:44:12.156014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.393 10:44:12 -- accel/accel.sh@21 -- # val= 00:06:23.393 10:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # IFS=: 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # read -r var val 00:06:23.393 10:44:12 -- accel/accel.sh@21 -- # val= 00:06:23.393 10:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # IFS=: 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # read -r var val 00:06:23.393 10:44:12 -- accel/accel.sh@21 -- # val=0x1 00:06:23.393 10:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # IFS=: 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # read -r var val 00:06:23.393 10:44:12 -- accel/accel.sh@21 -- # val= 00:06:23.393 10:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # IFS=: 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # read -r var val 00:06:23.393 10:44:12 -- accel/accel.sh@21 -- # val= 00:06:23.393 10:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # IFS=: 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # read -r var val 00:06:23.393 10:44:12 -- accel/accel.sh@21 -- # val=xor 00:06:23.393 10:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.393 10:44:12 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # IFS=: 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # read -r var val 00:06:23.393 10:44:12 -- accel/accel.sh@21 -- # val=3 00:06:23.393 10:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # IFS=: 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # read -r var val 00:06:23.393 10:44:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:23.393 10:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # IFS=: 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # read -r var val 00:06:23.393 10:44:12 -- accel/accel.sh@21 -- # val= 00:06:23.393 10:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # IFS=: 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # read -r var val 00:06:23.393 10:44:12 -- accel/accel.sh@21 -- # val=software 00:06:23.393 10:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.393 10:44:12 -- accel/accel.sh@23 -- # accel_module=software 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # IFS=: 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # read -r var val 00:06:23.393 10:44:12 -- accel/accel.sh@21 -- # val=32 00:06:23.393 10:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # IFS=: 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # read -r var val 00:06:23.393 10:44:12 -- accel/accel.sh@21 -- # val=32 00:06:23.393 10:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # IFS=: 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # read -r var val 00:06:23.393 10:44:12 -- 
accel/accel.sh@21 -- # val=1 00:06:23.393 10:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # IFS=: 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # read -r var val 00:06:23.393 10:44:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:23.393 10:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # IFS=: 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # read -r var val 00:06:23.393 10:44:12 -- accel/accel.sh@21 -- # val=Yes 00:06:23.393 10:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # IFS=: 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # read -r var val 00:06:23.393 10:44:12 -- accel/accel.sh@21 -- # val= 00:06:23.393 10:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # IFS=: 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # read -r var val 00:06:23.393 10:44:12 -- accel/accel.sh@21 -- # val= 00:06:23.393 10:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # IFS=: 00:06:23.393 10:44:12 -- accel/accel.sh@20 -- # read -r var val 00:06:24.330 10:44:13 -- accel/accel.sh@21 -- # val= 00:06:24.330 10:44:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.330 10:44:13 -- accel/accel.sh@20 -- # IFS=: 00:06:24.330 10:44:13 -- accel/accel.sh@20 -- # read -r var val 00:06:24.330 10:44:13 -- accel/accel.sh@21 -- # val= 00:06:24.330 10:44:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.330 10:44:13 -- accel/accel.sh@20 -- # IFS=: 00:06:24.330 10:44:13 -- accel/accel.sh@20 -- # read -r var val 00:06:24.330 10:44:13 -- accel/accel.sh@21 -- # val= 00:06:24.330 10:44:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.330 10:44:13 -- accel/accel.sh@20 -- # IFS=: 00:06:24.330 10:44:13 -- accel/accel.sh@20 -- # read -r var val 00:06:24.330 10:44:13 -- accel/accel.sh@21 -- # val= 00:06:24.330 10:44:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.330 10:44:13 -- accel/accel.sh@20 -- # IFS=: 00:06:24.330 10:44:13 -- accel/accel.sh@20 -- # read -r var val 00:06:24.330 10:44:13 -- accel/accel.sh@21 -- # val= 00:06:24.330 10:44:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.330 10:44:13 -- accel/accel.sh@20 -- # IFS=: 00:06:24.330 10:44:13 -- accel/accel.sh@20 -- # read -r var val 00:06:24.330 10:44:13 -- accel/accel.sh@21 -- # val= 00:06:24.330 10:44:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.330 10:44:13 -- accel/accel.sh@20 -- # IFS=: 00:06:24.330 10:44:13 -- accel/accel.sh@20 -- # read -r var val 00:06:24.330 10:44:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:24.330 10:44:13 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:24.330 10:44:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.330 00:06:24.330 real 0m2.654s 00:06:24.330 user 0m2.395s 00:06:24.330 sys 0m0.257s 00:06:24.330 10:44:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.330 10:44:13 -- common/autotest_common.sh@10 -- # set +x 00:06:24.330 ************************************ 00:06:24.330 END TEST accel_xor 00:06:24.330 ************************************ 00:06:24.590 10:44:13 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:24.590 10:44:13 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:24.590 10:44:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.590 10:44:13 -- common/autotest_common.sh@10 -- # set +x 00:06:24.590 ************************************ 00:06:24.590 START TEST 
accel_dif_verify
00:06:24.590 ************************************
00:06:24.590 10:44:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify
00:06:24.590 10:44:13 -- accel/accel.sh@16 -- # local accel_opc
00:06:24.590 10:44:13 -- accel/accel.sh@17 -- # local accel_module
00:06:24.590 10:44:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify
00:06:24.590 10:44:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:06:24.590 10:44:13 -- accel/accel.sh@12 -- # build_accel_config
00:06:24.590 10:44:13 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:24.590 10:44:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:24.590 10:44:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:24.590 10:44:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:24.590 10:44:13 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:24.590 10:44:13 -- accel/accel.sh@41 -- # local IFS=,
00:06:24.590 10:44:13 -- accel/accel.sh@42 -- # jq -r .
00:06:24.590 [2024-12-15 10:44:13.392499] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:24.590 [2024-12-15 10:44:13.392582] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1299784 ]
00:06:24.590 EAL: No free 2048 kB hugepages reported on node 1
00:06:24.590 [2024-12-15 10:44:13.463698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:24.590 [2024-12-15 10:44:13.532104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:25.969 10:44:14 -- accel/accel.sh@18 -- # out='
00:06:25.969 SPDK Configuration:
00:06:25.969 Core mask: 0x1
00:06:25.969
00:06:25.969 Accel Perf Configuration:
00:06:25.969 Workload Type: dif_verify
00:06:25.969 Vector size: 4096 bytes
00:06:25.969 Transfer size: 4096 bytes
00:06:25.969 Block size: 512 bytes
00:06:25.969 Metadata size: 8 bytes
00:06:25.969 Vector count 1
00:06:25.969 Module: software
00:06:25.969 Queue depth: 32
00:06:25.969 Allocate depth: 32
00:06:25.969 # threads/core: 1
00:06:25.969 Run time: 1 seconds
00:06:25.969 Verify: No
00:06:25.969
00:06:25.969 Running for 1 seconds...
00:06:25.969
00:06:25.969 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:25.969 ------------------------------------------------------------------------------------
00:06:25.969 0,0 247808/s 983 MiB/s 0 0
00:06:25.969 ====================================================================================
00:06:25.969 Total 247808/s 968 MiB/s 0 0'
00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # IFS=:
00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # read -r var val
00:06:25.969 10:44:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:06:25.969 10:44:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:06:25.969 10:44:14 -- accel/accel.sh@12 -- # build_accel_config
00:06:25.969 10:44:14 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:25.969 10:44:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:25.969 10:44:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:25.969 10:44:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:25.969 10:44:14 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:25.969 10:44:14 -- accel/accel.sh@41 -- # local IFS=,
00:06:25.969 10:44:14 -- accel/accel.sh@42 -- # jq -r .
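
For context: per the configuration printed above, the dif_verify workload checks T10 DIF-style protection data, treating each 4096-byte transfer as 512-byte blocks that each carry 8 bytes of integrity metadata. The traced command is reproducible by hand from the same workspace; -c /dev/fd/62 is the JSON accel config assembled by the harness's build_accel_config, and a bare run without it is assumed here to fall back to accel defaults:

  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify
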
00:06:25.969 [2024-12-15 10:44:14.721387] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:25.969 [2024-12-15 10:44:14.721485] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1299956 ] 00:06:25.969 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.969 [2024-12-15 10:44:14.790371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.969 [2024-12-15 10:44:14.856770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.969 10:44:14 -- accel/accel.sh@21 -- # val= 00:06:25.969 10:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # IFS=: 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # read -r var val 00:06:25.969 10:44:14 -- accel/accel.sh@21 -- # val= 00:06:25.969 10:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # IFS=: 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # read -r var val 00:06:25.969 10:44:14 -- accel/accel.sh@21 -- # val=0x1 00:06:25.969 10:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # IFS=: 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # read -r var val 00:06:25.969 10:44:14 -- accel/accel.sh@21 -- # val= 00:06:25.969 10:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # IFS=: 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # read -r var val 00:06:25.969 10:44:14 -- accel/accel.sh@21 -- # val= 00:06:25.969 10:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # IFS=: 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # read -r var val 00:06:25.969 10:44:14 -- accel/accel.sh@21 -- # val=dif_verify 00:06:25.969 10:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.969 10:44:14 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # IFS=: 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # read -r var val 00:06:25.969 10:44:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:25.969 10:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # IFS=: 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # read -r var val 00:06:25.969 10:44:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:25.969 10:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # IFS=: 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # read -r var val 00:06:25.969 10:44:14 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:25.969 10:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # IFS=: 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # read -r var val 00:06:25.969 10:44:14 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:25.969 10:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # IFS=: 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # read -r var val 00:06:25.969 10:44:14 -- accel/accel.sh@21 -- # val= 00:06:25.969 10:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # IFS=: 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # read -r var val 00:06:25.969 10:44:14 -- accel/accel.sh@21 -- # val=software 00:06:25.969 10:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.969 10:44:14 -- accel/accel.sh@23 -- # 
accel_module=software 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # IFS=: 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # read -r var val 00:06:25.969 10:44:14 -- accel/accel.sh@21 -- # val=32 00:06:25.969 10:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # IFS=: 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # read -r var val 00:06:25.969 10:44:14 -- accel/accel.sh@21 -- # val=32 00:06:25.969 10:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # IFS=: 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # read -r var val 00:06:25.969 10:44:14 -- accel/accel.sh@21 -- # val=1 00:06:25.969 10:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # IFS=: 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # read -r var val 00:06:25.969 10:44:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:25.969 10:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # IFS=: 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # read -r var val 00:06:25.969 10:44:14 -- accel/accel.sh@21 -- # val=No 00:06:25.969 10:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # IFS=: 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # read -r var val 00:06:25.969 10:44:14 -- accel/accel.sh@21 -- # val= 00:06:25.969 10:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # IFS=: 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # read -r var val 00:06:25.969 10:44:14 -- accel/accel.sh@21 -- # val= 00:06:25.969 10:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # IFS=: 00:06:25.969 10:44:14 -- accel/accel.sh@20 -- # read -r var val 00:06:27.348 10:44:16 -- accel/accel.sh@21 -- # val= 00:06:27.348 10:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.348 10:44:16 -- accel/accel.sh@20 -- # IFS=: 00:06:27.348 10:44:16 -- accel/accel.sh@20 -- # read -r var val 00:06:27.348 10:44:16 -- accel/accel.sh@21 -- # val= 00:06:27.348 10:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.348 10:44:16 -- accel/accel.sh@20 -- # IFS=: 00:06:27.348 10:44:16 -- accel/accel.sh@20 -- # read -r var val 00:06:27.348 10:44:16 -- accel/accel.sh@21 -- # val= 00:06:27.348 10:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.348 10:44:16 -- accel/accel.sh@20 -- # IFS=: 00:06:27.348 10:44:16 -- accel/accel.sh@20 -- # read -r var val 00:06:27.348 10:44:16 -- accel/accel.sh@21 -- # val= 00:06:27.348 10:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.348 10:44:16 -- accel/accel.sh@20 -- # IFS=: 00:06:27.348 10:44:16 -- accel/accel.sh@20 -- # read -r var val 00:06:27.348 10:44:16 -- accel/accel.sh@21 -- # val= 00:06:27.348 10:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.348 10:44:16 -- accel/accel.sh@20 -- # IFS=: 00:06:27.348 10:44:16 -- accel/accel.sh@20 -- # read -r var val 00:06:27.348 10:44:16 -- accel/accel.sh@21 -- # val= 00:06:27.348 10:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.348 10:44:16 -- accel/accel.sh@20 -- # IFS=: 00:06:27.348 10:44:16 -- accel/accel.sh@20 -- # read -r var val 00:06:27.348 10:44:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:27.348 10:44:16 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:27.348 10:44:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.348 00:06:27.348 real 0m2.657s 00:06:27.348 user 0m2.418s 00:06:27.348 sys 0m0.239s 00:06:27.348 10:44:16 -- 
common/autotest_common.sh@1115 -- # xtrace_disable
00:06:27.348 10:44:16 -- common/autotest_common.sh@10 -- # set +x
00:06:27.348 ************************************
00:06:27.348 END TEST accel_dif_verify
00:06:27.348 ************************************
00:06:27.348 10:44:16 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:06:27.348 10:44:16 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:06:27.348 10:44:16 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:27.348 10:44:16 -- common/autotest_common.sh@10 -- # set +x
00:06:27.348 ************************************
00:06:27.348 START TEST accel_dif_generate
00:06:27.348 ************************************
00:06:27.348 10:44:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate
00:06:27.348 10:44:16 -- accel/accel.sh@16 -- # local accel_opc
00:06:27.348 10:44:16 -- accel/accel.sh@17 -- # local accel_module
00:06:27.348 10:44:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate
00:06:27.348 10:44:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:06:27.348 10:44:16 -- accel/accel.sh@12 -- # build_accel_config
00:06:27.348 10:44:16 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:27.348 10:44:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:27.348 10:44:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:27.348 10:44:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:27.348 10:44:16 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:27.348 10:44:16 -- accel/accel.sh@41 -- # local IFS=,
00:06:27.348 10:44:16 -- accel/accel.sh@42 -- # jq -r .
00:06:27.348 [2024-12-15 10:44:16.092756] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:27.348 [2024-12-15 10:44:16.092847] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1300222 ]
00:06:27.348 EAL: No free 2048 kB hugepages reported on node 1
00:06:27.348 [2024-12-15 10:44:16.164340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:27.348 [2024-12-15 10:44:16.231396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:28.727 10:44:17 -- accel/accel.sh@18 -- # out='
00:06:28.727 SPDK Configuration:
00:06:28.727 Core mask: 0x1
00:06:28.727
00:06:28.727 Accel Perf Configuration:
00:06:28.727 Workload Type: dif_generate
00:06:28.727 Vector size: 4096 bytes
00:06:28.727 Transfer size: 4096 bytes
00:06:28.727 Block size: 512 bytes
00:06:28.727 Metadata size: 8 bytes
00:06:28.727 Vector count 1
00:06:28.727 Module: software
00:06:28.727 Queue depth: 32
00:06:28.727 Allocate depth: 32
00:06:28.727 # threads/core: 1
00:06:28.727 Run time: 1 seconds
00:06:28.727 Verify: No
00:06:28.727
00:06:28.727 Running for 1 seconds...
00:06:28.727 00:06:28.727 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:28.727 ------------------------------------------------------------------------------------ 00:06:28.727 0,0 289568/s 1148 MiB/s 0 0 00:06:28.727 ==================================================================================== 00:06:28.727 Total 289568/s 1131 MiB/s 0 0' 00:06:28.727 10:44:17 -- accel/accel.sh@20 -- # IFS=: 00:06:28.727 10:44:17 -- accel/accel.sh@20 -- # read -r var val 00:06:28.727 10:44:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:28.727 10:44:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:28.727 10:44:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.727 10:44:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.727 10:44:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.727 10:44:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.727 10:44:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.727 10:44:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.727 10:44:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.727 10:44:17 -- accel/accel.sh@42 -- # jq -r . 00:06:28.727 [2024-12-15 10:44:17.419786] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:28.727 [2024-12-15 10:44:17.419876] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1300490 ] 00:06:28.727 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.727 [2024-12-15 10:44:17.489250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.727 [2024-12-15 10:44:17.564536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.727 10:44:17 -- accel/accel.sh@21 -- # val= 00:06:28.727 10:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.727 10:44:17 -- accel/accel.sh@20 -- # IFS=: 00:06:28.727 10:44:17 -- accel/accel.sh@20 -- # read -r var val 00:06:28.727 10:44:17 -- accel/accel.sh@21 -- # val= 00:06:28.727 10:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.727 10:44:17 -- accel/accel.sh@20 -- # IFS=: 00:06:28.727 10:44:17 -- accel/accel.sh@20 -- # read -r var val 00:06:28.727 10:44:17 -- accel/accel.sh@21 -- # val=0x1 00:06:28.727 10:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.727 10:44:17 -- accel/accel.sh@20 -- # IFS=: 00:06:28.727 10:44:17 -- accel/accel.sh@20 -- # read -r var val 00:06:28.727 10:44:17 -- accel/accel.sh@21 -- # val= 00:06:28.727 10:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.727 10:44:17 -- accel/accel.sh@20 -- # IFS=: 00:06:28.727 10:44:17 -- accel/accel.sh@20 -- # read -r var val 00:06:28.727 10:44:17 -- accel/accel.sh@21 -- # val= 00:06:28.727 10:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.727 10:44:17 -- accel/accel.sh@20 -- # IFS=: 00:06:28.727 10:44:17 -- accel/accel.sh@20 -- # read -r var val 00:06:28.727 10:44:17 -- accel/accel.sh@21 -- # val=dif_generate 00:06:28.727 10:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.727 10:44:17 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:28.727 10:44:17 -- accel/accel.sh@20 -- # IFS=: 00:06:28.727 10:44:17 -- accel/accel.sh@20 -- # read -r var val 00:06:28.727 10:44:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:28.727 10:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.727 10:44:17 -- accel/accel.sh@20 -- # IFS=: 
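
For context: the accel_dif_generate results above are the companion workload to dif_verify; instead of checking existing protection data it generates the 8 bytes of metadata for each 512-byte block of the 4096-byte transfer, per the printed configuration. A bare reproduction of the traced command, under the same assumption about the /dev/fd/62 config:

  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate
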
00:06:28.727 10:44:17 -- accel/accel.sh@20 -- # read -r var val 00:06:28.727 10:44:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:28.727 10:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.727 10:44:17 -- accel/accel.sh@20 -- # IFS=: 00:06:28.727 10:44:17 -- accel/accel.sh@20 -- # read -r var val 00:06:28.727 10:44:17 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:28.727 10:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.727 10:44:17 -- accel/accel.sh@20 -- # IFS=: 00:06:28.728 10:44:17 -- accel/accel.sh@20 -- # read -r var val 00:06:28.728 10:44:17 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:28.728 10:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.728 10:44:17 -- accel/accel.sh@20 -- # IFS=: 00:06:28.728 10:44:17 -- accel/accel.sh@20 -- # read -r var val 00:06:28.728 10:44:17 -- accel/accel.sh@21 -- # val= 00:06:28.728 10:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.728 10:44:17 -- accel/accel.sh@20 -- # IFS=: 00:06:28.728 10:44:17 -- accel/accel.sh@20 -- # read -r var val 00:06:28.728 10:44:17 -- accel/accel.sh@21 -- # val=software 00:06:28.728 10:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.728 10:44:17 -- accel/accel.sh@23 -- # accel_module=software 00:06:28.728 10:44:17 -- accel/accel.sh@20 -- # IFS=: 00:06:28.728 10:44:17 -- accel/accel.sh@20 -- # read -r var val 00:06:28.728 10:44:17 -- accel/accel.sh@21 -- # val=32 00:06:28.728 10:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.728 10:44:17 -- accel/accel.sh@20 -- # IFS=: 00:06:28.728 10:44:17 -- accel/accel.sh@20 -- # read -r var val 00:06:28.728 10:44:17 -- accel/accel.sh@21 -- # val=32 00:06:28.728 10:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.728 10:44:17 -- accel/accel.sh@20 -- # IFS=: 00:06:28.728 10:44:17 -- accel/accel.sh@20 -- # read -r var val 00:06:28.728 10:44:17 -- accel/accel.sh@21 -- # val=1 00:06:28.728 10:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.728 10:44:17 -- accel/accel.sh@20 -- # IFS=: 00:06:28.728 10:44:17 -- accel/accel.sh@20 -- # read -r var val 00:06:28.728 10:44:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:28.728 10:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.728 10:44:17 -- accel/accel.sh@20 -- # IFS=: 00:06:28.728 10:44:17 -- accel/accel.sh@20 -- # read -r var val 00:06:28.728 10:44:17 -- accel/accel.sh@21 -- # val=No 00:06:28.728 10:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.728 10:44:17 -- accel/accel.sh@20 -- # IFS=: 00:06:28.728 10:44:17 -- accel/accel.sh@20 -- # read -r var val 00:06:28.728 10:44:17 -- accel/accel.sh@21 -- # val= 00:06:28.728 10:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.728 10:44:17 -- accel/accel.sh@20 -- # IFS=: 00:06:28.728 10:44:17 -- accel/accel.sh@20 -- # read -r var val 00:06:28.728 10:44:17 -- accel/accel.sh@21 -- # val= 00:06:28.728 10:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.728 10:44:17 -- accel/accel.sh@20 -- # IFS=: 00:06:28.728 10:44:17 -- accel/accel.sh@20 -- # read -r var val 00:06:30.107 10:44:18 -- accel/accel.sh@21 -- # val= 00:06:30.107 10:44:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.107 10:44:18 -- accel/accel.sh@20 -- # IFS=: 00:06:30.107 10:44:18 -- accel/accel.sh@20 -- # read -r var val 00:06:30.107 10:44:18 -- accel/accel.sh@21 -- # val= 00:06:30.107 10:44:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.107 10:44:18 -- accel/accel.sh@20 -- # IFS=: 00:06:30.107 10:44:18 -- accel/accel.sh@20 -- # read -r var val 00:06:30.107 10:44:18 -- accel/accel.sh@21 -- # val= 00:06:30.107 10:44:18 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:30.107 10:44:18 -- accel/accel.sh@20 -- # IFS=: 00:06:30.107 10:44:18 -- accel/accel.sh@20 -- # read -r var val 00:06:30.107 10:44:18 -- accel/accel.sh@21 -- # val= 00:06:30.107 10:44:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.107 10:44:18 -- accel/accel.sh@20 -- # IFS=: 00:06:30.107 10:44:18 -- accel/accel.sh@20 -- # read -r var val 00:06:30.107 10:44:18 -- accel/accel.sh@21 -- # val= 00:06:30.107 10:44:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.107 10:44:18 -- accel/accel.sh@20 -- # IFS=: 00:06:30.107 10:44:18 -- accel/accel.sh@20 -- # read -r var val 00:06:30.107 10:44:18 -- accel/accel.sh@21 -- # val= 00:06:30.107 10:44:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.107 10:44:18 -- accel/accel.sh@20 -- # IFS=: 00:06:30.107 10:44:18 -- accel/accel.sh@20 -- # read -r var val 00:06:30.107 10:44:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:30.107 10:44:18 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:30.107 10:44:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.107 00:06:30.107 real 0m2.669s 00:06:30.107 user 0m2.409s 00:06:30.107 sys 0m0.259s 00:06:30.107 10:44:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.107 10:44:18 -- common/autotest_common.sh@10 -- # set +x 00:06:30.107 ************************************ 00:06:30.107 END TEST accel_dif_generate 00:06:30.107 ************************************ 00:06:30.107 10:44:18 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:30.107 10:44:18 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:30.107 10:44:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.107 10:44:18 -- common/autotest_common.sh@10 -- # set +x 00:06:30.107 ************************************ 00:06:30.107 START TEST accel_dif_generate_copy 00:06:30.107 ************************************ 00:06:30.107 10:44:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:30.107 10:44:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.107 10:44:18 -- accel/accel.sh@17 -- # local accel_module 00:06:30.107 10:44:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:30.107 10:44:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:30.107 10:44:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.107 10:44:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.107 10:44:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.107 10:44:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.107 10:44:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.107 10:44:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.107 10:44:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.107 10:44:18 -- accel/accel.sh@42 -- # jq -r . 00:06:30.107 [2024-12-15 10:44:18.803290] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:30.107 [2024-12-15 10:44:18.803377] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1300782 ]
00:06:30.107 EAL: No free 2048 kB hugepages reported on node 1
00:06:30.107 [2024-12-15 10:44:18.873106] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:30.107 [2024-12-15 10:44:18.940290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:31.486 10:44:20 -- accel/accel.sh@18 -- # out='
00:06:31.486 SPDK Configuration:
00:06:31.486 Core mask: 0x1
00:06:31.486
00:06:31.486 Accel Perf Configuration:
00:06:31.486 Workload Type: dif_generate_copy
00:06:31.486 Vector size: 4096 bytes
00:06:31.487 Transfer size: 4096 bytes
00:06:31.487 Vector count 1
00:06:31.487 Module: software
00:06:31.487 Queue depth: 32
00:06:31.487 Allocate depth: 32
00:06:31.487 # threads/core: 1
00:06:31.487 Run time: 1 seconds
00:06:31.487 Verify: No
00:06:31.487
00:06:31.487 Running for 1 seconds...
00:06:31.487
00:06:31.487 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:31.487 ------------------------------------------------------------------------------------
00:06:31.487 0,0 225408/s 894 MiB/s 0 0
00:06:31.487 ====================================================================================
00:06:31.487 Total 225408/s 880 MiB/s 0 0'
00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # IFS=:
00:06:31.487 10:44:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # read -r var val
00:06:31.487 10:44:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:06:31.487 10:44:20 -- accel/accel.sh@12 -- # build_accel_config
00:06:31.487 10:44:20 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:31.487 10:44:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:31.487 10:44:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:31.487 10:44:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:31.487 10:44:20 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:31.487 10:44:20 -- accel/accel.sh@41 -- # local IFS=,
00:06:31.487 10:44:20 -- accel/accel.sh@42 -- # jq -r .
00:06:31.487 [2024-12-15 10:44:20.120952] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
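
For context: dif_generate_copy, configured and measured above, combines metadata generation with a copy of the source buffer; note that its printed configuration lists only vector and transfer sizes, with no separate block and metadata sizes. Bare form of the traced command, with the same caveat about the /dev/fd/62 config:

  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy
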
00:06:31.487 [2024-12-15 10:44:20.121002] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301048 ] 00:06:31.487 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.487 [2024-12-15 10:44:20.182421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.487 [2024-12-15 10:44:20.251547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.487 10:44:20 -- accel/accel.sh@21 -- # val= 00:06:31.487 10:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # IFS=: 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # read -r var val 00:06:31.487 10:44:20 -- accel/accel.sh@21 -- # val= 00:06:31.487 10:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # IFS=: 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # read -r var val 00:06:31.487 10:44:20 -- accel/accel.sh@21 -- # val=0x1 00:06:31.487 10:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # IFS=: 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # read -r var val 00:06:31.487 10:44:20 -- accel/accel.sh@21 -- # val= 00:06:31.487 10:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # IFS=: 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # read -r var val 00:06:31.487 10:44:20 -- accel/accel.sh@21 -- # val= 00:06:31.487 10:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # IFS=: 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # read -r var val 00:06:31.487 10:44:20 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:31.487 10:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.487 10:44:20 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # IFS=: 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # read -r var val 00:06:31.487 10:44:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:31.487 10:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # IFS=: 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # read -r var val 00:06:31.487 10:44:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:31.487 10:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # IFS=: 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # read -r var val 00:06:31.487 10:44:20 -- accel/accel.sh@21 -- # val= 00:06:31.487 10:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # IFS=: 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # read -r var val 00:06:31.487 10:44:20 -- accel/accel.sh@21 -- # val=software 00:06:31.487 10:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.487 10:44:20 -- accel/accel.sh@23 -- # accel_module=software 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # IFS=: 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # read -r var val 00:06:31.487 10:44:20 -- accel/accel.sh@21 -- # val=32 00:06:31.487 10:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # IFS=: 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # read -r var val 00:06:31.487 10:44:20 -- accel/accel.sh@21 -- # val=32 00:06:31.487 10:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # IFS=: 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # read -r 
var val 00:06:31.487 10:44:20 -- accel/accel.sh@21 -- # val=1 00:06:31.487 10:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # IFS=: 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # read -r var val 00:06:31.487 10:44:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:31.487 10:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # IFS=: 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # read -r var val 00:06:31.487 10:44:20 -- accel/accel.sh@21 -- # val=No 00:06:31.487 10:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # IFS=: 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # read -r var val 00:06:31.487 10:44:20 -- accel/accel.sh@21 -- # val= 00:06:31.487 10:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # IFS=: 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # read -r var val 00:06:31.487 10:44:20 -- accel/accel.sh@21 -- # val= 00:06:31.487 10:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # IFS=: 00:06:31.487 10:44:20 -- accel/accel.sh@20 -- # read -r var val 00:06:32.426 10:44:21 -- accel/accel.sh@21 -- # val= 00:06:32.426 10:44:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.426 10:44:21 -- accel/accel.sh@20 -- # IFS=: 00:06:32.426 10:44:21 -- accel/accel.sh@20 -- # read -r var val 00:06:32.426 10:44:21 -- accel/accel.sh@21 -- # val= 00:06:32.426 10:44:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.426 10:44:21 -- accel/accel.sh@20 -- # IFS=: 00:06:32.426 10:44:21 -- accel/accel.sh@20 -- # read -r var val 00:06:32.426 10:44:21 -- accel/accel.sh@21 -- # val= 00:06:32.426 10:44:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.426 10:44:21 -- accel/accel.sh@20 -- # IFS=: 00:06:32.426 10:44:21 -- accel/accel.sh@20 -- # read -r var val 00:06:32.426 10:44:21 -- accel/accel.sh@21 -- # val= 00:06:32.426 10:44:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.426 10:44:21 -- accel/accel.sh@20 -- # IFS=: 00:06:32.426 10:44:21 -- accel/accel.sh@20 -- # read -r var val 00:06:32.426 10:44:21 -- accel/accel.sh@21 -- # val= 00:06:32.426 10:44:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.426 10:44:21 -- accel/accel.sh@20 -- # IFS=: 00:06:32.426 10:44:21 -- accel/accel.sh@20 -- # read -r var val 00:06:32.426 10:44:21 -- accel/accel.sh@21 -- # val= 00:06:32.426 10:44:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.426 10:44:21 -- accel/accel.sh@20 -- # IFS=: 00:06:32.426 10:44:21 -- accel/accel.sh@20 -- # read -r var val 00:06:32.427 10:44:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:32.427 10:44:21 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:32.427 10:44:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.427 00:06:32.427 real 0m2.640s 00:06:32.427 user 0m2.401s 00:06:32.427 sys 0m0.237s 00:06:32.427 10:44:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.427 10:44:21 -- common/autotest_common.sh@10 -- # set +x 00:06:32.427 ************************************ 00:06:32.427 END TEST accel_dif_generate_copy 00:06:32.427 ************************************ 00:06:32.686 10:44:21 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:32.686 10:44:21 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:32.686 10:44:21 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:32.686 10:44:21 -- 
common/autotest_common.sh@1093 -- # xtrace_disable
00:06:32.686 10:44:21 -- common/autotest_common.sh@10 -- # set +x
00:06:32.686 ************************************
00:06:32.686 START TEST accel_comp
00:06:32.686 ************************************
00:06:32.686 10:44:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:06:32.686 10:44:21 -- accel/accel.sh@16 -- # local accel_opc
00:06:32.686 10:44:21 -- accel/accel.sh@17 -- # local accel_module
00:06:32.686 10:44:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:06:32.686 10:44:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:06:32.686 10:44:21 -- accel/accel.sh@12 -- # build_accel_config
00:06:32.686 10:44:21 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:32.686 10:44:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:32.686 10:44:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:32.686 10:44:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:32.686 10:44:21 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:32.686 10:44:21 -- accel/accel.sh@41 -- # local IFS=,
00:06:32.686 10:44:21 -- accel/accel.sh@42 -- # jq -r .
00:06:32.686 [2024-12-15 10:44:21.486459] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:32.686 [2024-12-15 10:44:21.486529] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301335 ]
00:06:32.686 EAL: No free 2048 kB hugepages reported on node 1
00:06:32.686 [2024-12-15 10:44:21.553677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:32.686 [2024-12-15 10:44:21.620650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:34.065 10:44:22 -- accel/accel.sh@18 -- # out='Preparing input file...
00:06:34.065
00:06:34.065 SPDK Configuration:
00:06:34.065 Core mask: 0x1
00:06:34.065
00:06:34.065 Accel Perf Configuration:
00:06:34.065 Workload Type: compress
00:06:34.065 Transfer size: 4096 bytes
00:06:34.065 Vector count 1
00:06:34.065 Module: software
00:06:34.065 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:06:34.065 Queue depth: 32
00:06:34.065 Allocate depth: 32
00:06:34.065 # threads/core: 1
00:06:34.065 Run time: 1 seconds
00:06:34.065 Verify: No
00:06:34.065
00:06:34.065 Running for 1 seconds...
00:06:34.065 00:06:34.065 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:34.065 ------------------------------------------------------------------------------------ 00:06:34.065 0,0 68000/s 283 MiB/s 0 0 00:06:34.065 ==================================================================================== 00:06:34.065 Total 68000/s 265 MiB/s 0 0' 00:06:34.065 10:44:22 -- accel/accel.sh@20 -- # IFS=: 00:06:34.065 10:44:22 -- accel/accel.sh@20 -- # read -r var val 00:06:34.065 10:44:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:34.065 10:44:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:34.065 10:44:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.065 10:44:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.065 10:44:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.065 10:44:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.065 10:44:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.065 10:44:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.065 10:44:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.065 10:44:22 -- accel/accel.sh@42 -- # jq -r . 00:06:34.065 [2024-12-15 10:44:22.812263] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:34.065 [2024-12-15 10:44:22.812349] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301533 ] 00:06:34.065 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.065 [2024-12-15 10:44:22.882840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.065 [2024-12-15 10:44:22.947780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.065 10:44:22 -- accel/accel.sh@21 -- # val= 00:06:34.065 10:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.065 10:44:22 -- accel/accel.sh@20 -- # IFS=: 00:06:34.065 10:44:22 -- accel/accel.sh@20 -- # read -r var val 00:06:34.065 10:44:22 -- accel/accel.sh@21 -- # val= 00:06:34.065 10:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.065 10:44:22 -- accel/accel.sh@20 -- # IFS=: 00:06:34.065 10:44:22 -- accel/accel.sh@20 -- # read -r var val 00:06:34.065 10:44:22 -- accel/accel.sh@21 -- # val= 00:06:34.065 10:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.065 10:44:22 -- accel/accel.sh@20 -- # IFS=: 00:06:34.065 10:44:22 -- accel/accel.sh@20 -- # read -r var val 00:06:34.065 10:44:22 -- accel/accel.sh@21 -- # val=0x1 00:06:34.065 10:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.065 10:44:22 -- accel/accel.sh@20 -- # IFS=: 00:06:34.065 10:44:22 -- accel/accel.sh@20 -- # read -r var val 00:06:34.065 10:44:22 -- accel/accel.sh@21 -- # val= 00:06:34.065 10:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.065 10:44:22 -- accel/accel.sh@20 -- # IFS=: 00:06:34.065 10:44:22 -- accel/accel.sh@20 -- # read -r var val 00:06:34.065 10:44:22 -- accel/accel.sh@21 -- # val= 00:06:34.065 10:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.065 10:44:22 -- accel/accel.sh@20 -- # IFS=: 00:06:34.065 10:44:22 -- accel/accel.sh@20 -- # read -r var val 00:06:34.065 10:44:22 -- accel/accel.sh@21 -- # val=compress 00:06:34.065 10:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.065 
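
For context: the accel_comp results above exercise the compress opcode against the test corpus at test/accel/bib, loaded via -l (the 'Preparing input file...' line marks that load). Bare form of the traced command, with the same caveat about the /dev/fd/62 config:

  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
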
10:44:22 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:34.065 10:44:22 -- accel/accel.sh@20 -- # IFS=: 00:06:34.066 10:44:22 -- accel/accel.sh@20 -- # read -r var val 00:06:34.066 10:44:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.066 10:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.066 10:44:22 -- accel/accel.sh@20 -- # IFS=: 00:06:34.066 10:44:22 -- accel/accel.sh@20 -- # read -r var val 00:06:34.066 10:44:22 -- accel/accel.sh@21 -- # val= 00:06:34.066 10:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.066 10:44:22 -- accel/accel.sh@20 -- # IFS=: 00:06:34.066 10:44:22 -- accel/accel.sh@20 -- # read -r var val 00:06:34.066 10:44:22 -- accel/accel.sh@21 -- # val=software 00:06:34.066 10:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.066 10:44:22 -- accel/accel.sh@23 -- # accel_module=software 00:06:34.066 10:44:22 -- accel/accel.sh@20 -- # IFS=: 00:06:34.066 10:44:22 -- accel/accel.sh@20 -- # read -r var val 00:06:34.066 10:44:22 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:34.066 10:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.066 10:44:22 -- accel/accel.sh@20 -- # IFS=: 00:06:34.066 10:44:22 -- accel/accel.sh@20 -- # read -r var val 00:06:34.066 10:44:22 -- accel/accel.sh@21 -- # val=32 00:06:34.066 10:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.066 10:44:22 -- accel/accel.sh@20 -- # IFS=: 00:06:34.066 10:44:22 -- accel/accel.sh@20 -- # read -r var val 00:06:34.066 10:44:22 -- accel/accel.sh@21 -- # val=32 00:06:34.066 10:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.066 10:44:22 -- accel/accel.sh@20 -- # IFS=: 00:06:34.066 10:44:22 -- accel/accel.sh@20 -- # read -r var val 00:06:34.066 10:44:22 -- accel/accel.sh@21 -- # val=1 00:06:34.066 10:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.066 10:44:22 -- accel/accel.sh@20 -- # IFS=: 00:06:34.066 10:44:22 -- accel/accel.sh@20 -- # read -r var val 00:06:34.066 10:44:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:34.066 10:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.066 10:44:22 -- accel/accel.sh@20 -- # IFS=: 00:06:34.066 10:44:22 -- accel/accel.sh@20 -- # read -r var val 00:06:34.066 10:44:22 -- accel/accel.sh@21 -- # val=No 00:06:34.066 10:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.066 10:44:22 -- accel/accel.sh@20 -- # IFS=: 00:06:34.066 10:44:22 -- accel/accel.sh@20 -- # read -r var val 00:06:34.066 10:44:22 -- accel/accel.sh@21 -- # val= 00:06:34.066 10:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.066 10:44:22 -- accel/accel.sh@20 -- # IFS=: 00:06:34.066 10:44:22 -- accel/accel.sh@20 -- # read -r var val 00:06:34.066 10:44:22 -- accel/accel.sh@21 -- # val= 00:06:34.066 10:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.066 10:44:22 -- accel/accel.sh@20 -- # IFS=: 00:06:34.066 10:44:22 -- accel/accel.sh@20 -- # read -r var val 00:06:35.445 10:44:24 -- accel/accel.sh@21 -- # val= 00:06:35.445 10:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.445 10:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:35.445 10:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:35.445 10:44:24 -- accel/accel.sh@21 -- # val= 00:06:35.445 10:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.445 10:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:35.445 10:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:35.445 10:44:24 -- accel/accel.sh@21 -- # val= 00:06:35.445 10:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.445 10:44:24 -- accel/accel.sh@20 -- # 
IFS=: 00:06:35.445 10:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:35.445 10:44:24 -- accel/accel.sh@21 -- # val= 00:06:35.445 10:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.445 10:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:35.445 10:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:35.445 10:44:24 -- accel/accel.sh@21 -- # val= 00:06:35.445 10:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.445 10:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:35.445 10:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:35.445 10:44:24 -- accel/accel.sh@21 -- # val= 00:06:35.445 10:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.445 10:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:35.445 10:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:35.445 10:44:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:35.445 10:44:24 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:35.445 10:44:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.445 00:06:35.445 real 0m2.656s 00:06:35.445 user 0m2.407s 00:06:35.445 sys 0m0.248s 00:06:35.445 10:44:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.445 10:44:24 -- common/autotest_common.sh@10 -- # set +x 00:06:35.445 ************************************ 00:06:35.445 END TEST accel_comp 00:06:35.445 ************************************ 00:06:35.445 10:44:24 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:35.445 10:44:24 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:35.445 10:44:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.445 10:44:24 -- common/autotest_common.sh@10 -- # set +x 00:06:35.445 ************************************ 00:06:35.445 START TEST accel_decomp 00:06:35.445 ************************************ 00:06:35.445 10:44:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:35.445 10:44:24 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.445 10:44:24 -- accel/accel.sh@17 -- # local accel_module 00:06:35.445 10:44:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:35.445 10:44:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:35.445 10:44:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.445 10:44:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.445 10:44:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.445 10:44:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.445 10:44:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.445 10:44:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.445 10:44:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.445 10:44:24 -- accel/accel.sh@42 -- # jq -r . 00:06:35.445 [2024-12-15 10:44:24.186927] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:35.445 [2024-12-15 10:44:24.187019] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301737 ]
00:06:35.445 EAL: No free 2048 kB hugepages reported on node 1
00:06:35.445 [2024-12-15 10:44:24.258378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:35.445 [2024-12-15 10:44:24.325278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:36.823 10:44:25 -- accel/accel.sh@18 -- # out='Preparing input file...
00:06:36.823
00:06:36.823 SPDK Configuration:
00:06:36.823 Core mask: 0x1
00:06:36.823
00:06:36.823 Accel Perf Configuration:
00:06:36.823 Workload Type: decompress
00:06:36.823 Transfer size: 4096 bytes
00:06:36.823 Vector count 1
00:06:36.823 Module: software
00:06:36.823 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:06:36.823 Queue depth: 32
00:06:36.823 Allocate depth: 32
00:06:36.823 # threads/core: 1
00:06:36.823 Run time: 1 seconds
00:06:36.823 Verify: Yes
00:06:36.823
00:06:36.823 Running for 1 seconds...
00:06:36.824
00:06:36.824 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:36.824 ------------------------------------------------------------------------------------
00:06:36.824 0,0 92672/s 170 MiB/s 0 0
00:06:36.824 ====================================================================================
00:06:36.824 Total 92672/s 362 MiB/s 0 0'
00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # IFS=:
00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # read -r var val
00:06:36.824 10:44:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y
00:06:36.824 10:44:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y
00:06:36.824 10:44:25 -- accel/accel.sh@12 -- # build_accel_config
00:06:36.824 10:44:25 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:36.824 10:44:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:36.824 10:44:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:36.824 10:44:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:36.824 10:44:25 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:36.824 10:44:25 -- accel/accel.sh@41 -- # local IFS=,
00:06:36.824 10:44:25 -- accel/accel.sh@42 -- # jq -r .
00:06:36.824 [2024-12-15 10:44:25.514990] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
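
For context: accel_decomp, configured above, replays the same bib corpus through the decompress opcode with -y, so results are checked against the expected data (Verify: Yes in the printed configuration, versus Verify: No for the compress pass). Bare form of the traced command:

  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y
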
00:06:36.824 [2024-12-15 10:44:25.515077] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301919 ] 00:06:36.824 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.824 [2024-12-15 10:44:25.585500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.824 [2024-12-15 10:44:25.651724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.824 10:44:25 -- accel/accel.sh@21 -- # val= 00:06:36.824 10:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:36.824 10:44:25 -- accel/accel.sh@21 -- # val= 00:06:36.824 10:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:36.824 10:44:25 -- accel/accel.sh@21 -- # val= 00:06:36.824 10:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:36.824 10:44:25 -- accel/accel.sh@21 -- # val=0x1 00:06:36.824 10:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:36.824 10:44:25 -- accel/accel.sh@21 -- # val= 00:06:36.824 10:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:36.824 10:44:25 -- accel/accel.sh@21 -- # val= 00:06:36.824 10:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:36.824 10:44:25 -- accel/accel.sh@21 -- # val=decompress 00:06:36.824 10:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.824 10:44:25 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:36.824 10:44:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.824 10:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:36.824 10:44:25 -- accel/accel.sh@21 -- # val= 00:06:36.824 10:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:36.824 10:44:25 -- accel/accel.sh@21 -- # val=software 00:06:36.824 10:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.824 10:44:25 -- accel/accel.sh@23 -- # accel_module=software 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:36.824 10:44:25 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:36.824 10:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:36.824 10:44:25 -- accel/accel.sh@21 -- # val=32 00:06:36.824 10:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:36.824 
10:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:36.824 10:44:25 -- accel/accel.sh@21 -- # val=32 00:06:36.824 10:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:36.824 10:44:25 -- accel/accel.sh@21 -- # val=1 00:06:36.824 10:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:36.824 10:44:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:36.824 10:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:36.824 10:44:25 -- accel/accel.sh@21 -- # val=Yes 00:06:36.824 10:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:36.824 10:44:25 -- accel/accel.sh@21 -- # val= 00:06:36.824 10:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:36.824 10:44:25 -- accel/accel.sh@21 -- # val= 00:06:36.824 10:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:36.824 10:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:38.202 10:44:26 -- accel/accel.sh@21 -- # val= 00:06:38.202 10:44:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.202 10:44:26 -- accel/accel.sh@20 -- # IFS=: 00:06:38.202 10:44:26 -- accel/accel.sh@20 -- # read -r var val 00:06:38.202 10:44:26 -- accel/accel.sh@21 -- # val= 00:06:38.202 10:44:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.202 10:44:26 -- accel/accel.sh@20 -- # IFS=: 00:06:38.202 10:44:26 -- accel/accel.sh@20 -- # read -r var val 00:06:38.202 10:44:26 -- accel/accel.sh@21 -- # val= 00:06:38.202 10:44:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.202 10:44:26 -- accel/accel.sh@20 -- # IFS=: 00:06:38.202 10:44:26 -- accel/accel.sh@20 -- # read -r var val 00:06:38.202 10:44:26 -- accel/accel.sh@21 -- # val= 00:06:38.202 10:44:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.202 10:44:26 -- accel/accel.sh@20 -- # IFS=: 00:06:38.202 10:44:26 -- accel/accel.sh@20 -- # read -r var val 00:06:38.202 10:44:26 -- accel/accel.sh@21 -- # val= 00:06:38.202 10:44:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.202 10:44:26 -- accel/accel.sh@20 -- # IFS=: 00:06:38.202 10:44:26 -- accel/accel.sh@20 -- # read -r var val 00:06:38.202 10:44:26 -- accel/accel.sh@21 -- # val= 00:06:38.202 10:44:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.202 10:44:26 -- accel/accel.sh@20 -- # IFS=: 00:06:38.202 10:44:26 -- accel/accel.sh@20 -- # read -r var val 00:06:38.202 10:44:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:38.202 10:44:26 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:38.202 10:44:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.202 00:06:38.202 real 0m2.658s 00:06:38.202 user 0m2.384s 00:06:38.202 sys 0m0.274s 00:06:38.203 10:44:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:38.203 10:44:26 -- common/autotest_common.sh@10 -- # set +x 00:06:38.203 ************************************ 00:06:38.203 END TEST accel_decomp 00:06:38.203 ************************************ 00:06:38.203 10:44:26 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 
-w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0
00:06:38.203 10:44:26 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:06:38.203 10:44:26 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:38.203 10:44:26 -- common/autotest_common.sh@10 -- # set +x
00:06:38.203 ************************************
00:06:38.203 START TEST accel_decmop_full
00:06:38.203 ************************************
00:06:38.203 10:44:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0
00:06:38.203 10:44:26 -- accel/accel.sh@16 -- # local accel_opc
00:06:38.203 10:44:26 -- accel/accel.sh@17 -- # local accel_module
00:06:38.203 10:44:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0
00:06:38.203 10:44:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0
00:06:38.203 10:44:26 -- accel/accel.sh@12 -- # build_accel_config
00:06:38.203 10:44:26 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:38.203 10:44:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:38.203 10:44:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:38.203 10:44:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:38.203 10:44:26 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:38.203 10:44:26 -- accel/accel.sh@41 -- # local IFS=,
00:06:38.203 10:44:26 -- accel/accel.sh@42 -- # jq -r .
00:06:38.203 [2024-12-15 10:44:26.885088] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:38.203 [2024-12-15 10:44:26.885162] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1302204 ]
00:06:38.203 EAL: No free 2048 kB hugepages reported on node 1
00:06:38.203 [2024-12-15 10:44:26.953386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:38.203 [2024-12-15 10:44:27.020707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:39.582 10:44:28 -- accel/accel.sh@18 -- # out='Preparing input file...
00:06:39.582
00:06:39.582 SPDK Configuration:
00:06:39.582 Core mask: 0x1
00:06:39.582
00:06:39.582 Accel Perf Configuration:
00:06:39.582 Workload Type: decompress
00:06:39.582 Transfer size: 111250 bytes
00:06:39.582 Vector count 1
00:06:39.582 Module: software
00:06:39.582 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:06:39.582 Queue depth: 32
00:06:39.582 Allocate depth: 32
00:06:39.582 # threads/core: 1
00:06:39.582 Run time: 1 seconds
00:06:39.582 Verify: Yes
00:06:39.582
00:06:39.582 Running for 1 seconds...
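
For context, ahead of the results that follow below: this accel_decmop_full pass differs from accel_decomp only by -o 0, which appears to let accel_perf size transfers from the input rather than the default 4096 bytes; the configuration above reports 111250-byte transfers (an inference from the printed output, not stated in the log). Bare form of the traced command:

  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0
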
00:06:39.582 00:06:39.582 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:39.582 ------------------------------------------------------------------------------------ 00:06:39.582 0,0 5920/s 244 MiB/s 0 0 00:06:39.582 ==================================================================================== 00:06:39.582 Total 5920/s 628 MiB/s 0 0' 00:06:39.582 10:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:39.582 10:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:39.582 10:44:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:39.582 10:44:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:39.582 10:44:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.582 10:44:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.582 10:44:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.582 10:44:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.582 10:44:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.582 10:44:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.582 10:44:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.582 10:44:28 -- accel/accel.sh@42 -- # jq -r . 00:06:39.582 [2024-12-15 10:44:28.216306] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:39.582 [2024-12-15 10:44:28.216395] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1302470 ] 00:06:39.582 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.583 [2024-12-15 10:44:28.285248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.583 [2024-12-15 10:44:28.350699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.583 10:44:28 -- accel/accel.sh@21 -- # val= 00:06:39.583 10:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:39.583 10:44:28 -- accel/accel.sh@21 -- # val= 00:06:39.583 10:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:39.583 10:44:28 -- accel/accel.sh@21 -- # val= 00:06:39.583 10:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:39.583 10:44:28 -- accel/accel.sh@21 -- # val=0x1 00:06:39.583 10:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:39.583 10:44:28 -- accel/accel.sh@21 -- # val= 00:06:39.583 10:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:39.583 10:44:28 -- accel/accel.sh@21 -- # val= 00:06:39.583 10:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:39.583 10:44:28 -- accel/accel.sh@21 -- # val=decompress 00:06:39.583 10:44:28 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:39.583 10:44:28 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:39.583 10:44:28 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:39.583 10:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:39.583 10:44:28 -- accel/accel.sh@21 -- # val= 00:06:39.583 10:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:39.583 10:44:28 -- accel/accel.sh@21 -- # val=software 00:06:39.583 10:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.583 10:44:28 -- accel/accel.sh@23 -- # accel_module=software 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:39.583 10:44:28 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:39.583 10:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:39.583 10:44:28 -- accel/accel.sh@21 -- # val=32 00:06:39.583 10:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:39.583 10:44:28 -- accel/accel.sh@21 -- # val=32 00:06:39.583 10:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:39.583 10:44:28 -- accel/accel.sh@21 -- # val=1 00:06:39.583 10:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:39.583 10:44:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:39.583 10:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:39.583 10:44:28 -- accel/accel.sh@21 -- # val=Yes 00:06:39.583 10:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:39.583 10:44:28 -- accel/accel.sh@21 -- # val= 00:06:39.583 10:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:39.583 10:44:28 -- accel/accel.sh@21 -- # val= 00:06:39.583 10:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:39.583 10:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:40.521 10:44:29 -- accel/accel.sh@21 -- # val= 00:06:40.521 10:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.521 10:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:40.521 10:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:40.521 10:44:29 -- accel/accel.sh@21 -- # val= 00:06:40.521 10:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.521 10:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:40.521 10:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:40.521 10:44:29 -- accel/accel.sh@21 -- # val= 00:06:40.521 10:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.521 10:44:29 
-- accel/accel.sh@20 -- # IFS=: 00:06:40.521 10:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:40.521 10:44:29 -- accel/accel.sh@21 -- # val= 00:06:40.521 10:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.521 10:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:40.521 10:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:40.521 10:44:29 -- accel/accel.sh@21 -- # val= 00:06:40.521 10:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.521 10:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:40.521 10:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:40.521 10:44:29 -- accel/accel.sh@21 -- # val= 00:06:40.521 10:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.521 10:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:40.521 10:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:40.521 10:44:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:40.521 10:44:29 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:40.521 10:44:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.521 00:06:40.521 real 0m2.667s 00:06:40.521 user 0m2.421s 00:06:40.521 sys 0m0.242s 00:06:40.521 10:44:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.521 10:44:29 -- common/autotest_common.sh@10 -- # set +x 00:06:40.521 ************************************ 00:06:40.521 END TEST accel_decmop_full 00:06:40.521 ************************************ 00:06:40.783 10:44:29 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:40.783 10:44:29 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:40.783 10:44:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.783 10:44:29 -- common/autotest_common.sh@10 -- # set +x 00:06:40.783 ************************************ 00:06:40.783 START TEST accel_decomp_mcore 00:06:40.783 ************************************ 00:06:40.783 10:44:29 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:40.783 10:44:29 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.783 10:44:29 -- accel/accel.sh@17 -- # local accel_module 00:06:40.783 10:44:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:40.783 10:44:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:40.783 10:44:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.783 10:44:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.783 10:44:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.783 10:44:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.783 10:44:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.783 10:44:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.783 10:44:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.783 10:44:29 -- accel/accel.sh@42 -- # jq -r . 00:06:40.783 [2024-12-15 10:44:29.597837] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
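The accel_perf invocations traced throughout this run reduce to one short command line. A minimal sketch of reproducing the multicore decompress case by hand, assuming a built SPDK tree (SPDK_DIR here is illustrative; the log's absolute path is /var/jenkins/workspace/short-fuzz-phy-autotest/spdk, and the harness additionally feeds a generated JSON config via -c /dev/fd/62):

  SPDK_DIR=${SPDK_DIR:-./spdk}   # assumption: path to a built SPDK tree
  # -t run time in seconds, -w workload type, -l input file,
  # -y verify output, -m core mask (0xf = cores 0-3)
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK_DIR/test/accel/bib" -y -m 0xf
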
00:06:40.783 [2024-12-15 10:44:29.597924] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1302754 ] 00:06:40.783 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.783 [2024-12-15 10:44:29.669074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:40.783 [2024-12-15 10:44:29.738752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.783 [2024-12-15 10:44:29.738850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.783 [2024-12-15 10:44:29.738934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.783 [2024-12-15 10:44:29.738936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.164 10:44:30 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:42.164 00:06:42.164 SPDK Configuration: 00:06:42.164 Core mask: 0xf 00:06:42.164 00:06:42.164 Accel Perf Configuration: 00:06:42.164 Workload Type: decompress 00:06:42.164 Transfer size: 4096 bytes 00:06:42.164 Vector count 1 00:06:42.164 Module: software 00:06:42.164 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:42.164 Queue depth: 32 00:06:42.164 Allocate depth: 32 00:06:42.164 # threads/core: 1 00:06:42.164 Run time: 1 seconds 00:06:42.164 Verify: Yes 00:06:42.164 00:06:42.164 Running for 1 seconds... 00:06:42.164 00:06:42.164 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:42.164 ------------------------------------------------------------------------------------ 00:06:42.164 0,0 75392/s 138 MiB/s 0 0 00:06:42.164 3,0 76000/s 140 MiB/s 0 0 00:06:42.164 2,0 74784/s 137 MiB/s 0 0 00:06:42.164 1,0 75744/s 139 MiB/s 0 0 00:06:42.164 ==================================================================================== 00:06:42.164 Total 301920/s 1179 MiB/s 0 0' 00:06:42.164 10:44:30 -- accel/accel.sh@20 -- # IFS=: 00:06:42.164 10:44:30 -- accel/accel.sh@20 -- # read -r var val 00:06:42.164 10:44:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:42.164 10:44:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:42.164 10:44:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.164 10:44:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.164 10:44:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.164 10:44:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.164 10:44:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.164 10:44:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.164 10:44:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.164 10:44:30 -- accel/accel.sh@42 -- # jq -r . 00:06:42.164 [2024-12-15 10:44:30.939682] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
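In each of these result tables the Total line is simply the arithmetic sum of the per-core Transfers rates; here 75392 + 76000 + 74784 + 75744 = 301920. A small sketch for checking that against a captured table (sum_transfers is a hypothetical helper, and the rows are retyped from the run above):

  sum_transfers() {
      # rows look like "0,0 75392/s 138 MiB/s 0 0"; column 2 is the rate
      awk '/^[0-9]+,[0-9]+ / { gsub("/s", "", $2); s += $2 } END { print s "/s" }'
  }
  printf '%s\n' '0,0 75392/s 138 MiB/s 0 0' '3,0 76000/s 140 MiB/s 0 0' \
                '2,0 74784/s 137 MiB/s 0 0' '1,0 75744/s 139 MiB/s 0 0' \
      | sum_transfers   # prints 301920/s, matching the Total row
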
00:06:42.164 [2024-12-15 10:44:30.939777] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1303026 ] 00:06:42.164 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.164 [2024-12-15 10:44:31.008176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.164 [2024-12-15 10:44:31.076379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.164 [2024-12-15 10:44:31.076506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.164 [2024-12-15 10:44:31.076531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.164 [2024-12-15 10:44:31.076533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.164 10:44:31 -- accel/accel.sh@21 -- # val= 00:06:42.164 10:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.164 10:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.164 10:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.164 10:44:31 -- accel/accel.sh@21 -- # val= 00:06:42.164 10:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.164 10:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.164 10:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.164 10:44:31 -- accel/accel.sh@21 -- # val= 00:06:42.164 10:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.164 10:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.164 10:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.164 10:44:31 -- accel/accel.sh@21 -- # val=0xf 00:06:42.164 10:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.164 10:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.164 10:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.164 10:44:31 -- accel/accel.sh@21 -- # val= 00:06:42.164 10:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.164 10:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.164 10:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.164 10:44:31 -- accel/accel.sh@21 -- # val= 00:06:42.164 10:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.164 10:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.164 10:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.164 10:44:31 -- accel/accel.sh@21 -- # val=decompress 00:06:42.164 10:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.164 10:44:31 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:42.164 10:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.164 10:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.164 10:44:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.164 10:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.164 10:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.164 10:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.164 10:44:31 -- accel/accel.sh@21 -- # val= 00:06:42.164 10:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.164 10:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.164 10:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.164 10:44:31 -- accel/accel.sh@21 -- # val=software 00:06:42.164 10:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.164 10:44:31 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.164 10:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.164 10:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.164 10:44:31 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:42.164 10:44:31 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:42.164 10:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.164 10:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.164 10:44:31 -- accel/accel.sh@21 -- # val=32 00:06:42.165 10:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.165 10:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.165 10:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.165 10:44:31 -- accel/accel.sh@21 -- # val=32 00:06:42.165 10:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.165 10:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.165 10:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.165 10:44:31 -- accel/accel.sh@21 -- # val=1 00:06:42.165 10:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.165 10:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.165 10:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.165 10:44:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:42.165 10:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.165 10:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.165 10:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.165 10:44:31 -- accel/accel.sh@21 -- # val=Yes 00:06:42.165 10:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.165 10:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.165 10:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.165 10:44:31 -- accel/accel.sh@21 -- # val= 00:06:42.165 10:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.165 10:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.165 10:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.165 10:44:31 -- accel/accel.sh@21 -- # val= 00:06:42.165 10:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.165 10:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.165 10:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:43.618 10:44:32 -- accel/accel.sh@21 -- # val= 00:06:43.618 10:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.618 10:44:32 -- accel/accel.sh@20 -- # IFS=: 00:06:43.618 10:44:32 -- accel/accel.sh@20 -- # read -r var val 00:06:43.618 10:44:32 -- accel/accel.sh@21 -- # val= 00:06:43.618 10:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.618 10:44:32 -- accel/accel.sh@20 -- # IFS=: 00:06:43.618 10:44:32 -- accel/accel.sh@20 -- # read -r var val 00:06:43.618 10:44:32 -- accel/accel.sh@21 -- # val= 00:06:43.618 10:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.618 10:44:32 -- accel/accel.sh@20 -- # IFS=: 00:06:43.618 10:44:32 -- accel/accel.sh@20 -- # read -r var val 00:06:43.618 10:44:32 -- accel/accel.sh@21 -- # val= 00:06:43.618 10:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.618 10:44:32 -- accel/accel.sh@20 -- # IFS=: 00:06:43.618 10:44:32 -- accel/accel.sh@20 -- # read -r var val 00:06:43.618 10:44:32 -- accel/accel.sh@21 -- # val= 00:06:43.618 10:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.618 10:44:32 -- accel/accel.sh@20 -- # IFS=: 00:06:43.618 10:44:32 -- accel/accel.sh@20 -- # read -r var val 00:06:43.618 10:44:32 -- accel/accel.sh@21 -- # val= 00:06:43.618 10:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.618 10:44:32 -- accel/accel.sh@20 -- # IFS=: 00:06:43.618 10:44:32 -- accel/accel.sh@20 -- # read -r var val 00:06:43.618 10:44:32 -- accel/accel.sh@21 -- # val= 00:06:43.618 10:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.618 10:44:32 -- accel/accel.sh@20 -- # IFS=: 00:06:43.618 10:44:32 -- accel/accel.sh@20 -- # read -r var val 00:06:43.618 10:44:32 -- accel/accel.sh@21 -- # val= 00:06:43.618 10:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.618 
10:44:32 -- accel/accel.sh@20 -- # IFS=: 00:06:43.618 10:44:32 -- accel/accel.sh@20 -- # read -r var val 00:06:43.618 10:44:32 -- accel/accel.sh@21 -- # val= 00:06:43.618 10:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.618 10:44:32 -- accel/accel.sh@20 -- # IFS=: 00:06:43.618 10:44:32 -- accel/accel.sh@20 -- # read -r var val 00:06:43.618 10:44:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:43.618 10:44:32 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:43.618 10:44:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.618 00:06:43.618 real 0m2.690s 00:06:43.618 user 0m9.084s 00:06:43.618 sys 0m0.274s 00:06:43.618 10:44:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:43.618 10:44:32 -- common/autotest_common.sh@10 -- # set +x 00:06:43.618 ************************************ 00:06:43.618 END TEST accel_decomp_mcore 00:06:43.618 ************************************ 00:06:43.618 10:44:32 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:43.618 10:44:32 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:43.618 10:44:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.618 10:44:32 -- common/autotest_common.sh@10 -- # set +x 00:06:43.618 ************************************ 00:06:43.618 START TEST accel_decomp_full_mcore 00:06:43.618 ************************************ 00:06:43.618 10:44:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:43.618 10:44:32 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.619 10:44:32 -- accel/accel.sh@17 -- # local accel_module 00:06:43.619 10:44:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:43.619 10:44:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:43.619 10:44:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.619 10:44:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.619 10:44:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.619 10:44:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.619 10:44:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.619 10:44:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.619 10:44:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.619 10:44:32 -- accel/accel.sh@42 -- # jq -r . 00:06:43.619 [2024-12-15 10:44:32.333652] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
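The blocks of four "Reactor started on core N" notices in these multicore runs, versus the single notice in the 0x1 runs, follow directly from the -m mask: the EAL starts one reactor per set bit, and the result table later shows one core,thread row per reactor. A minimal sketch of that correspondence (popcount is an illustrative helper, not part of the suite):

  popcount() {
      # count the set bits in a (hex) mask, e.g. 0xf -> 4
      local mask=$(( $1 )) n=0
      while (( mask )); do (( n += mask & 1, mask >>= 1 )); done
      echo "$n"
  }
  popcount 0xf   # 4, matching "Total cores available: 4"
  popcount 0x1   # 1, as in the single-core runs elsewhere in this log
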
00:06:43.619 [2024-12-15 10:44:32.333739] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1303321 ] 00:06:43.619 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.619 [2024-12-15 10:44:32.404405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:43.619 [2024-12-15 10:44:32.475257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.619 [2024-12-15 10:44:32.475351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.619 [2024-12-15 10:44:32.475422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.619 [2024-12-15 10:44:32.475424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.998 10:44:33 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:44.998 00:06:44.998 SPDK Configuration: 00:06:44.998 Core mask: 0xf 00:06:44.998 00:06:44.998 Accel Perf Configuration: 00:06:44.998 Workload Type: decompress 00:06:44.998 Transfer size: 111250 bytes 00:06:44.998 Vector count 1 00:06:44.998 Module: software 00:06:44.998 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:44.998 Queue depth: 32 00:06:44.998 Allocate depth: 32 00:06:44.998 # threads/core: 1 00:06:44.998 Run time: 1 seconds 00:06:44.998 Verify: Yes 00:06:44.998 00:06:44.998 Running for 1 seconds... 00:06:44.998 00:06:44.998 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:44.998 ------------------------------------------------------------------------------------ 00:06:44.998 0,0 5792/s 239 MiB/s 0 0 00:06:44.998 3,0 5824/s 240 MiB/s 0 0 00:06:44.998 2,0 5824/s 240 MiB/s 0 0 00:06:44.998 1,0 5824/s 240 MiB/s 0 0 00:06:44.998 ==================================================================================== 00:06:44.998 Total 23264/s 2468 MiB/s 0 0' 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:44.998 10:44:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:44.998 10:44:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:44.998 10:44:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.998 10:44:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.998 10:44:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.998 10:44:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.998 10:44:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.998 10:44:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.998 10:44:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.998 10:44:33 -- accel/accel.sh@42 -- # jq -r . 00:06:44.998 [2024-12-15 10:44:33.684104] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
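The repeated IFS=:, read -r var val, and case "$var" in fragments in the xtrace are accel.sh parsing accel_perf's configuration echo line by line, picking out the fields it asserts on at the end of each test (accel_opc and accel_module, checked by the [[ -n decompress ]] / [[ -n software ]] lines). A reconstruction of that visible pattern, using the field names shown in this log rather than the actual accel.sh source:

  while IFS=: read -r var val; do
      case "$var" in
          'Workload Type') accel_opc=${val# } ;;      # e.g. decompress
          'Module')        accel_module=${val# } ;;   # e.g. software
          *) : ;;                                     # ignore other config lines
      esac
  done < <(printf '%s\n' 'Workload Type: decompress' 'Module: software')
  echo "$accel_opc / $accel_module"   # decompress / software
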
00:06:44.998 [2024-12-15 10:44:33.684209] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1303550 ] 00:06:44.998 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.998 [2024-12-15 10:44:33.753139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:44.998 [2024-12-15 10:44:33.821229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.998 [2024-12-15 10:44:33.821323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.998 [2024-12-15 10:44:33.821406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.998 [2024-12-15 10:44:33.821408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.998 10:44:33 -- accel/accel.sh@21 -- # val= 00:06:44.998 10:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:44.998 10:44:33 -- accel/accel.sh@21 -- # val= 00:06:44.998 10:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:44.998 10:44:33 -- accel/accel.sh@21 -- # val= 00:06:44.998 10:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:44.998 10:44:33 -- accel/accel.sh@21 -- # val=0xf 00:06:44.998 10:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:44.998 10:44:33 -- accel/accel.sh@21 -- # val= 00:06:44.998 10:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:44.998 10:44:33 -- accel/accel.sh@21 -- # val= 00:06:44.998 10:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:44.998 10:44:33 -- accel/accel.sh@21 -- # val=decompress 00:06:44.998 10:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.998 10:44:33 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:44.998 10:44:33 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:44.998 10:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:44.998 10:44:33 -- accel/accel.sh@21 -- # val= 00:06:44.998 10:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:44.998 10:44:33 -- accel/accel.sh@21 -- # val=software 00:06:44.998 10:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.998 10:44:33 -- accel/accel.sh@23 -- # accel_module=software 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:44.998 10:44:33 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:44.998 10:44:33 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:44.998 10:44:33 -- accel/accel.sh@21 -- # val=32 00:06:44.998 10:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:44.998 10:44:33 -- accel/accel.sh@21 -- # val=32 00:06:44.998 10:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:44.998 10:44:33 -- accel/accel.sh@21 -- # val=1 00:06:44.998 10:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:44.998 10:44:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:44.998 10:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:44.998 10:44:33 -- accel/accel.sh@21 -- # val=Yes 00:06:44.998 10:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:44.998 10:44:33 -- accel/accel.sh@21 -- # val= 00:06:44.998 10:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:44.998 10:44:33 -- accel/accel.sh@21 -- # val= 00:06:44.998 10:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:44.998 10:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:46.377 10:44:35 -- accel/accel.sh@21 -- # val= 00:06:46.377 10:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.377 10:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.377 10:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:46.377 10:44:35 -- accel/accel.sh@21 -- # val= 00:06:46.377 10:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.377 10:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.377 10:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:46.377 10:44:35 -- accel/accel.sh@21 -- # val= 00:06:46.377 10:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.377 10:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.377 10:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:46.377 10:44:35 -- accel/accel.sh@21 -- # val= 00:06:46.377 10:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.377 10:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.377 10:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:46.377 10:44:35 -- accel/accel.sh@21 -- # val= 00:06:46.377 10:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.377 10:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.377 10:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:46.377 10:44:35 -- accel/accel.sh@21 -- # val= 00:06:46.377 10:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.377 10:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.377 10:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:46.377 10:44:35 -- accel/accel.sh@21 -- # val= 00:06:46.377 10:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.377 10:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.377 10:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:46.377 10:44:35 -- accel/accel.sh@21 -- # val= 00:06:46.377 10:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.377 
10:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.377 10:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:46.377 10:44:35 -- accel/accel.sh@21 -- # val= 00:06:46.377 10:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.377 10:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.377 10:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:46.377 10:44:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:46.377 10:44:35 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:46.377 10:44:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.377 00:06:46.377 real 0m2.705s 00:06:46.377 user 0m9.122s 00:06:46.377 sys 0m0.289s 00:06:46.377 10:44:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:46.377 10:44:35 -- common/autotest_common.sh@10 -- # set +x 00:06:46.377 ************************************ 00:06:46.377 END TEST accel_decomp_full_mcore 00:06:46.377 ************************************ 00:06:46.377 10:44:35 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:46.377 10:44:35 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:46.377 10:44:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.377 10:44:35 -- common/autotest_common.sh@10 -- # set +x 00:06:46.377 ************************************ 00:06:46.377 START TEST accel_decomp_mthread 00:06:46.377 ************************************ 00:06:46.377 10:44:35 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:46.377 10:44:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.377 10:44:35 -- accel/accel.sh@17 -- # local accel_module 00:06:46.377 10:44:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:46.377 10:44:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:46.377 10:44:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.377 10:44:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.377 10:44:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.377 10:44:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.377 10:44:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.377 10:44:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.377 10:44:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.377 10:44:35 -- accel/accel.sh@42 -- # jq -r . 00:06:46.377 [2024-12-15 10:44:35.087486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:46.377 [2024-12-15 10:44:35.087575] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1303769 ] 00:06:46.377 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.377 [2024-12-15 10:44:35.157159] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.377 [2024-12-15 10:44:35.224971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.755 10:44:36 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:47.755 00:06:47.755 SPDK Configuration: 00:06:47.755 Core mask: 0x1 00:06:47.755 00:06:47.755 Accel Perf Configuration: 00:06:47.755 Workload Type: decompress 00:06:47.755 Transfer size: 4096 bytes 00:06:47.755 Vector count 1 00:06:47.755 Module: software 00:06:47.755 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:47.755 Queue depth: 32 00:06:47.755 Allocate depth: 32 00:06:47.755 # threads/core: 2 00:06:47.755 Run time: 1 seconds 00:06:47.755 Verify: Yes 00:06:47.755 00:06:47.755 Running for 1 seconds... 00:06:47.755 00:06:47.755 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:47.755 ------------------------------------------------------------------------------------ 00:06:47.755 0,1 46560/s 85 MiB/s 0 0 00:06:47.755 0,0 46400/s 85 MiB/s 0 0 00:06:47.755 ==================================================================================== 00:06:47.755 Total 92960/s 363 MiB/s 0 0' 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:47.755 10:44:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:47.755 10:44:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:47.755 10:44:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.755 10:44:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.755 10:44:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.755 10:44:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.755 10:44:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.755 10:44:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.755 10:44:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.755 10:44:36 -- accel/accel.sh@42 -- # jq -r . 00:06:47.755 [2024-12-15 10:44:36.419699] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
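Every real/user/sys triple and every pair of START TEST / END TEST banners in this log comes from the run_test wrapper named on the run_test lines above. A rough reconstruction of its shape from the visible output alone, not copied from autotest_common.sh (the real wrapper also manages xtrace and argument checks):

  run_test() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"                 # produces the real/user/sys lines
      local rc=$?
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
      return $rc
  }
  run_test demo_sleep sleep 1   # demo_sleep is a hypothetical test name
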
00:06:47.755 [2024-12-15 10:44:36.419788] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1303932 ] 00:06:47.755 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.755 [2024-12-15 10:44:36.488711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.755 [2024-12-15 10:44:36.554139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.755 10:44:36 -- accel/accel.sh@21 -- # val= 00:06:47.755 10:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:47.755 10:44:36 -- accel/accel.sh@21 -- # val= 00:06:47.755 10:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:47.755 10:44:36 -- accel/accel.sh@21 -- # val= 00:06:47.755 10:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:47.755 10:44:36 -- accel/accel.sh@21 -- # val=0x1 00:06:47.755 10:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:47.755 10:44:36 -- accel/accel.sh@21 -- # val= 00:06:47.755 10:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:47.755 10:44:36 -- accel/accel.sh@21 -- # val= 00:06:47.755 10:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:47.755 10:44:36 -- accel/accel.sh@21 -- # val=decompress 00:06:47.755 10:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.755 10:44:36 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:47.755 10:44:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:47.755 10:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:47.755 10:44:36 -- accel/accel.sh@21 -- # val= 00:06:47.755 10:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:47.755 10:44:36 -- accel/accel.sh@21 -- # val=software 00:06:47.755 10:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.755 10:44:36 -- accel/accel.sh@23 -- # accel_module=software 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:47.755 10:44:36 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:47.755 10:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:47.755 10:44:36 -- accel/accel.sh@21 -- # val=32 00:06:47.755 10:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:47.755 
10:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:47.755 10:44:36 -- accel/accel.sh@21 -- # val=32 00:06:47.755 10:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:47.755 10:44:36 -- accel/accel.sh@21 -- # val=2 00:06:47.755 10:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:47.755 10:44:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:47.755 10:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:47.755 10:44:36 -- accel/accel.sh@21 -- # val=Yes 00:06:47.755 10:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.755 10:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:47.756 10:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:47.756 10:44:36 -- accel/accel.sh@21 -- # val= 00:06:47.756 10:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.756 10:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:47.756 10:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:47.756 10:44:36 -- accel/accel.sh@21 -- # val= 00:06:47.756 10:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.756 10:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:47.756 10:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:49.135 10:44:37 -- accel/accel.sh@21 -- # val= 00:06:49.135 10:44:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.135 10:44:37 -- accel/accel.sh@20 -- # IFS=: 00:06:49.135 10:44:37 -- accel/accel.sh@20 -- # read -r var val 00:06:49.135 10:44:37 -- accel/accel.sh@21 -- # val= 00:06:49.135 10:44:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.135 10:44:37 -- accel/accel.sh@20 -- # IFS=: 00:06:49.135 10:44:37 -- accel/accel.sh@20 -- # read -r var val 00:06:49.135 10:44:37 -- accel/accel.sh@21 -- # val= 00:06:49.135 10:44:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.135 10:44:37 -- accel/accel.sh@20 -- # IFS=: 00:06:49.135 10:44:37 -- accel/accel.sh@20 -- # read -r var val 00:06:49.135 10:44:37 -- accel/accel.sh@21 -- # val= 00:06:49.135 10:44:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.135 10:44:37 -- accel/accel.sh@20 -- # IFS=: 00:06:49.135 10:44:37 -- accel/accel.sh@20 -- # read -r var val 00:06:49.135 10:44:37 -- accel/accel.sh@21 -- # val= 00:06:49.135 10:44:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.135 10:44:37 -- accel/accel.sh@20 -- # IFS=: 00:06:49.135 10:44:37 -- accel/accel.sh@20 -- # read -r var val 00:06:49.135 10:44:37 -- accel/accel.sh@21 -- # val= 00:06:49.135 10:44:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.135 10:44:37 -- accel/accel.sh@20 -- # IFS=: 00:06:49.135 10:44:37 -- accel/accel.sh@20 -- # read -r var val 00:06:49.135 10:44:37 -- accel/accel.sh@21 -- # val= 00:06:49.135 10:44:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.135 10:44:37 -- accel/accel.sh@20 -- # IFS=: 00:06:49.135 10:44:37 -- accel/accel.sh@20 -- # read -r var val 00:06:49.135 10:44:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:49.135 10:44:37 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:49.135 10:44:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.135 00:06:49.135 real 0m2.669s 00:06:49.135 user 0m2.421s 00:06:49.135 sys 0m0.258s 00:06:49.135 10:44:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.135 10:44:37 -- common/autotest_common.sh@10 -- # 
set +x 00:06:49.135 ************************************ 00:06:49.135 END TEST accel_decomp_mthread 00:06:49.135 ************************************ 00:06:49.135 10:44:37 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:49.135 10:44:37 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:49.135 10:44:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.135 10:44:37 -- common/autotest_common.sh@10 -- # set +x 00:06:49.135 ************************************ 00:06:49.135 START TEST accel_deomp_full_mthread 00:06:49.135 ************************************ 00:06:49.135 10:44:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:49.135 10:44:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.135 10:44:37 -- accel/accel.sh@17 -- # local accel_module 00:06:49.135 10:44:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:49.135 10:44:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:49.135 10:44:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.135 10:44:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.135 10:44:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.135 10:44:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.135 10:44:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.135 10:44:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.135 10:44:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.135 10:44:37 -- accel/accel.sh@42 -- # jq -r . 00:06:49.135 [2024-12-15 10:44:37.805450] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:49.135 [2024-12-15 10:44:37.805540] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304185 ] 00:06:49.135 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.135 [2024-12-15 10:44:37.875469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.135 [2024-12-15 10:44:37.943257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.514 10:44:39 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:50.514 00:06:50.514 SPDK Configuration: 00:06:50.514 Core mask: 0x1 00:06:50.514 00:06:50.514 Accel Perf Configuration: 00:06:50.514 Workload Type: decompress 00:06:50.514 Transfer size: 111250 bytes 00:06:50.514 Vector count 1 00:06:50.514 Module: software 00:06:50.514 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:50.514 Queue depth: 32 00:06:50.514 Allocate depth: 32 00:06:50.514 # threads/core: 2 00:06:50.514 Run time: 1 seconds 00:06:50.514 Verify: Yes 00:06:50.514 00:06:50.514 Running for 1 seconds... 
00:06:50.514 00:06:50.514 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:50.514 ------------------------------------------------------------------------------------ 00:06:50.514 0,1 3040/s 125 MiB/s 0 0 00:06:50.514 0,0 2976/s 122 MiB/s 0 0 00:06:50.514 ==================================================================================== 00:06:50.514 Total 6016/s 638 MiB/s 0 0' 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.514 10:44:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:50.514 10:44:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:50.514 10:44:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.514 10:44:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.514 10:44:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.514 10:44:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.514 10:44:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.514 10:44:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.514 10:44:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.514 10:44:39 -- accel/accel.sh@42 -- # jq -r . 00:06:50.514 [2024-12-15 10:44:39.158375] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:50.514 [2024-12-15 10:44:39.158472] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304457 ] 00:06:50.514 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.514 [2024-12-15 10:44:39.228952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.514 [2024-12-15 10:44:39.296371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.514 10:44:39 -- accel/accel.sh@21 -- # val= 00:06:50.514 10:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.514 10:44:39 -- accel/accel.sh@21 -- # val= 00:06:50.514 10:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.514 10:44:39 -- accel/accel.sh@21 -- # val= 00:06:50.514 10:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.514 10:44:39 -- accel/accel.sh@21 -- # val=0x1 00:06:50.514 10:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.514 10:44:39 -- accel/accel.sh@21 -- # val= 00:06:50.514 10:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.514 10:44:39 -- accel/accel.sh@21 -- # val= 00:06:50.514 10:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.514 10:44:39 -- accel/accel.sh@21 -- # val=decompress 
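Every accel_perf command here receives its accel configuration as -c /dev/fd/62: the harness builds a JSON document (the accel_json_cfg array filtered through jq -r .) and hands it to the child on an inherited file descriptor rather than a temporary file. The same mechanism in isolation, with an illustrative empty config and a stand-in reader:

  json='{"subsystems": []}'        # illustrative only; the suite generates its own
  read_config() { cat "$2"; }      # stand-in for a tool that takes -c <path>
  read_config -c <(printf '%s\n' "$json")   # <(...) expands to a /dev/fd/NN path
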
00:06:50.514 10:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.514 10:44:39 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.514 10:44:39 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:50.514 10:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.514 10:44:39 -- accel/accel.sh@21 -- # val= 00:06:50.514 10:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.514 10:44:39 -- accel/accel.sh@21 -- # val=software 00:06:50.514 10:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.514 10:44:39 -- accel/accel.sh@23 -- # accel_module=software 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.514 10:44:39 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:50.514 10:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.514 10:44:39 -- accel/accel.sh@21 -- # val=32 00:06:50.514 10:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.514 10:44:39 -- accel/accel.sh@21 -- # val=32 00:06:50.514 10:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.514 10:44:39 -- accel/accel.sh@21 -- # val=2 00:06:50.514 10:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.514 10:44:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:50.514 10:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.514 10:44:39 -- accel/accel.sh@21 -- # val=Yes 00:06:50.514 10:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.514 10:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.515 10:44:39 -- accel/accel.sh@21 -- # val= 00:06:50.515 10:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.515 10:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.515 10:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.515 10:44:39 -- accel/accel.sh@21 -- # val= 00:06:50.515 10:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.515 10:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.515 10:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:51.893 10:44:40 -- accel/accel.sh@21 -- # val= 00:06:51.893 10:44:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.893 10:44:40 -- accel/accel.sh@20 -- # IFS=: 00:06:51.893 10:44:40 -- accel/accel.sh@20 -- # read -r var val 00:06:51.893 10:44:40 -- accel/accel.sh@21 -- # val= 00:06:51.893 10:44:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.893 10:44:40 -- accel/accel.sh@20 -- # IFS=: 00:06:51.893 10:44:40 -- accel/accel.sh@20 -- # read -r var val 00:06:51.893 10:44:40 -- accel/accel.sh@21 -- # val= 00:06:51.893 10:44:40 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:51.893 10:44:40 -- accel/accel.sh@20 -- # IFS=: 00:06:51.893 10:44:40 -- accel/accel.sh@20 -- # read -r var val 00:06:51.893 10:44:40 -- accel/accel.sh@21 -- # val= 00:06:51.893 10:44:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.893 10:44:40 -- accel/accel.sh@20 -- # IFS=: 00:06:51.893 10:44:40 -- accel/accel.sh@20 -- # read -r var val 00:06:51.893 10:44:40 -- accel/accel.sh@21 -- # val= 00:06:51.893 10:44:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.893 10:44:40 -- accel/accel.sh@20 -- # IFS=: 00:06:51.893 10:44:40 -- accel/accel.sh@20 -- # read -r var val 00:06:51.893 10:44:40 -- accel/accel.sh@21 -- # val= 00:06:51.893 10:44:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.893 10:44:40 -- accel/accel.sh@20 -- # IFS=: 00:06:51.893 10:44:40 -- accel/accel.sh@20 -- # read -r var val 00:06:51.893 10:44:40 -- accel/accel.sh@21 -- # val= 00:06:51.893 10:44:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.893 10:44:40 -- accel/accel.sh@20 -- # IFS=: 00:06:51.893 10:44:40 -- accel/accel.sh@20 -- # read -r var val 00:06:51.893 10:44:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:51.893 10:44:40 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:51.893 10:44:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.893 00:06:51.893 real 0m2.713s 00:06:51.893 user 0m2.458s 00:06:51.893 sys 0m0.264s 00:06:51.893 10:44:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:51.893 10:44:40 -- common/autotest_common.sh@10 -- # set +x 00:06:51.893 ************************************ 00:06:51.893 END TEST accel_deomp_full_mthread 00:06:51.893 ************************************ 00:06:51.893 10:44:40 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:51.893 10:44:40 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:51.893 10:44:40 -- accel/accel.sh@129 -- # build_accel_config 00:06:51.893 10:44:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:51.893 10:44:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.893 10:44:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.894 10:44:40 -- common/autotest_common.sh@10 -- # set +x 00:06:51.894 10:44:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.894 10:44:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.894 10:44:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.894 10:44:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.894 10:44:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.894 10:44:40 -- accel/accel.sh@42 -- # jq -r . 00:06:51.894 ************************************ 00:06:51.894 START TEST accel_dif_functional_tests 00:06:51.894 ************************************ 00:06:51.894 10:44:40 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:51.894 [2024-12-15 10:44:40.569258] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
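The ERROR lines inside the "verify: DIF not generated" cases below are expected output: each negative case corrupts a Guard, App Tag, or Ref Tag on purpose, and the CUnit test passes precisely because the comparison fails. The generic shell shape of such a negative test, as a sketch (expect_failure is a hypothetical helper, unrelated to the suite's own mechanics):

  expect_failure() {
      # succeed only if the wrapped command fails
      if "$@"; then
          echo "FAIL: '$*' unexpectedly succeeded" >&2
          return 1
      fi
      echo "ok: '$*' failed as expected"
  }
  expect_failure false   # trivial stand-in for a corrupted-tag comparison
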
00:06:51.894 [2024-12-15 10:44:40.569349] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304746 ] 00:06:51.894 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.894 [2024-12-15 10:44:40.637073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.894 [2024-12-15 10:44:40.706088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.894 [2024-12-15 10:44:40.706183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.894 [2024-12-15 10:44:40.706186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.894 00:06:51.894 00:06:51.894 CUnit - A unit testing framework for C - Version 2.1-3 00:06:51.894 http://cunit.sourceforge.net/ 00:06:51.894 00:06:51.894 00:06:51.894 Suite: accel_dif 00:06:51.894 Test: verify: DIF generated, GUARD check ...passed 00:06:51.894 Test: verify: DIF generated, APPTAG check ...passed 00:06:51.894 Test: verify: DIF generated, REFTAG check ...passed 00:06:51.894 Test: verify: DIF not generated, GUARD check ...[2024-12-15 10:44:40.775059] dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:51.894 [2024-12-15 10:44:40.775107] dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:51.894 passed 00:06:51.894 Test: verify: DIF not generated, APPTAG check ...[2024-12-15 10:44:40.775141] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:51.894 [2024-12-15 10:44:40.775160] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:51.894 passed 00:06:51.894 Test: verify: DIF not generated, REFTAG check ...[2024-12-15 10:44:40.775180] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:51.894 [2024-12-15 10:44:40.775198] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:51.894 passed 00:06:51.894 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:51.894 Test: verify: APPTAG incorrect, APPTAG check ...[2024-12-15 10:44:40.775242] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:51.894 passed 00:06:51.894 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:51.894 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:51.894 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:51.894 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-12-15 10:44:40.775342] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:51.894 passed 00:06:51.894 Test: generate copy: DIF generated, GUARD check ...passed 00:06:51.894 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:51.894 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:51.894 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:51.894 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:51.894 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:51.894 Test: generate copy: iovecs-len validate ...[2024-12-15 10:44:40.775533] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:51.894 passed 00:06:51.894 Test: generate copy: buffer alignment validate ...passed 00:06:51.894 00:06:51.894 Run Summary: Type Total Ran Passed Failed Inactive 00:06:51.894 suites 1 1 n/a 0 0 00:06:51.894 tests 20 20 20 0 0 00:06:51.894 asserts 204 204 204 0 n/a 00:06:51.894 00:06:51.894 Elapsed time = 0.002 seconds 00:06:52.153 00:06:52.153 real 0m0.388s 00:06:52.153 user 0m0.583s 00:06:52.153 sys 0m0.156s 00:06:52.153 10:44:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.153 10:44:40 -- common/autotest_common.sh@10 -- # set +x 00:06:52.153 ************************************ 00:06:52.153 END TEST accel_dif_functional_tests 00:06:52.153 ************************************ 00:06:52.153 00:06:52.153 real 0m56.975s 00:06:52.153 user 1m4.523s 00:06:52.153 sys 0m7.030s 00:06:52.153 10:44:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.153 10:44:40 -- common/autotest_common.sh@10 -- # set +x 00:06:52.153 ************************************ 00:06:52.153 END TEST accel 00:06:52.153 ************************************ 00:06:52.153 10:44:41 -- spdk/autotest.sh@177 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:52.153 10:44:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:52.153 10:44:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.153 10:44:41 -- common/autotest_common.sh@10 -- # set +x 00:06:52.153 ************************************ 00:06:52.153 START TEST accel_rpc 00:06:52.153 ************************************ 00:06:52.153 10:44:41 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:52.153 * Looking for test storage... 00:06:52.153 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:06:52.153 10:44:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:52.153 10:44:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:52.153 10:44:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:52.153 10:44:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:52.153 10:44:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:52.153 10:44:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:52.153 10:44:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:52.153 10:44:41 -- scripts/common.sh@335 -- # IFS=.-: 00:06:52.153 10:44:41 -- scripts/common.sh@335 -- # read -ra ver1 00:06:52.153 10:44:41 -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.153 10:44:41 -- scripts/common.sh@336 -- # read -ra ver2 00:06:52.153 10:44:41 -- scripts/common.sh@337 -- # local 'op=<' 00:06:52.153 10:44:41 -- scripts/common.sh@339 -- # ver1_l=2 00:06:52.153 10:44:41 -- scripts/common.sh@340 -- # ver2_l=1 00:06:52.153 10:44:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:52.153 10:44:41 -- scripts/common.sh@343 -- # case "$op" in 00:06:52.153 10:44:41 -- scripts/common.sh@344 -- # : 1 00:06:52.153 10:44:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:52.153 10:44:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:52.412 10:44:41 -- scripts/common.sh@364 -- # decimal 1 00:06:52.412 10:44:41 -- scripts/common.sh@352 -- # local d=1 00:06:52.412 10:44:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.412 10:44:41 -- scripts/common.sh@354 -- # echo 1 00:06:52.412 10:44:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:52.412 10:44:41 -- scripts/common.sh@365 -- # decimal 2 00:06:52.412 10:44:41 -- scripts/common.sh@352 -- # local d=2 00:06:52.412 10:44:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.412 10:44:41 -- scripts/common.sh@354 -- # echo 2 00:06:52.412 10:44:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:52.412 10:44:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:52.412 10:44:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:52.412 10:44:41 -- scripts/common.sh@367 -- # return 0 00:06:52.412 10:44:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.412 10:44:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:52.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.412 --rc genhtml_branch_coverage=1 00:06:52.412 --rc genhtml_function_coverage=1 00:06:52.412 --rc genhtml_legend=1 00:06:52.412 --rc geninfo_all_blocks=1 00:06:52.412 --rc geninfo_unexecuted_blocks=1 00:06:52.412 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:52.412 ' 00:06:52.412 10:44:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:52.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.412 --rc genhtml_branch_coverage=1 00:06:52.412 --rc genhtml_function_coverage=1 00:06:52.412 --rc genhtml_legend=1 00:06:52.412 --rc geninfo_all_blocks=1 00:06:52.412 --rc geninfo_unexecuted_blocks=1 00:06:52.412 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:52.412 ' 00:06:52.412 10:44:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:52.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.412 --rc genhtml_branch_coverage=1 00:06:52.412 --rc genhtml_function_coverage=1 00:06:52.412 --rc genhtml_legend=1 00:06:52.412 --rc geninfo_all_blocks=1 00:06:52.412 --rc geninfo_unexecuted_blocks=1 00:06:52.412 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:52.412 ' 00:06:52.412 10:44:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:52.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.412 --rc genhtml_branch_coverage=1 00:06:52.412 --rc genhtml_function_coverage=1 00:06:52.412 --rc genhtml_legend=1 00:06:52.412 --rc geninfo_all_blocks=1 00:06:52.412 --rc geninfo_unexecuted_blocks=1 00:06:52.412 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:52.412 ' 00:06:52.412 10:44:41 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:52.412 10:44:41 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1304934 00:06:52.412 10:44:41 -- accel/accel_rpc.sh@15 -- # waitforlisten 1304934 00:06:52.412 10:44:41 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:52.412 10:44:41 -- common/autotest_common.sh@829 -- # '[' -z 1304934 ']' 00:06:52.412 10:44:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.413 10:44:41 -- common/autotest_common.sh@834 -- # local max_retries=100 
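The lt 1.15 2 walk traced above is scripts/common.sh gating on the lcov version: the version strings are split on ./- and compared field by field, left to right, returning as soon as one field differs. A condensed sketch of the logic the trace steps through (reconstructed from the xtrace; the real helper also normalizes non-numeric fields via its decimal step):

  cmp_versions() {                      # e.g. cmp_versions 1.15 '<' 2
      local IFS=.- op=$2 v ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          if ((${ver1[v]:-0} > ${ver2[v]:-0})); then [[ $op == '>' ]]; return; fi
          if ((${ver1[v]:-0} < ${ver2[v]:-0})); then [[ $op == '<' ]]; return; fi
      done
      [[ $op == '=' ]]                  # every field matched
  }

  cmp_versions 1.15 '<' 2 && echo older   # 1 < 2 decides at the first field, as in the trace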
00:06:52.413 10:44:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.413 10:44:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.413 10:44:41 -- common/autotest_common.sh@10 -- # set +x 00:06:52.413 [2024-12-15 10:44:41.191682] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:52.413 [2024-12-15 10:44:41.191746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304934 ] 00:06:52.413 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.413 [2024-12-15 10:44:41.258353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.413 [2024-12-15 10:44:41.333003] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:52.413 [2024-12-15 10:44:41.333115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.413 10:44:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.413 10:44:41 -- common/autotest_common.sh@862 -- # return 0 00:06:52.413 10:44:41 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:52.413 10:44:41 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:52.413 10:44:41 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:52.413 10:44:41 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:52.413 10:44:41 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:52.413 10:44:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:52.413 10:44:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.413 10:44:41 -- common/autotest_common.sh@10 -- # set +x 00:06:52.413 ************************************ 00:06:52.413 START TEST accel_assign_opcode 00:06:52.413 ************************************ 00:06:52.413 10:44:41 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:06:52.413 10:44:41 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:52.413 10:44:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.413 10:44:41 -- common/autotest_common.sh@10 -- # set +x 00:06:52.413 [2024-12-15 10:44:41.365515] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:52.413 10:44:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.413 10:44:41 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:52.413 10:44:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.413 10:44:41 -- common/autotest_common.sh@10 -- # set +x 00:06:52.413 [2024-12-15 10:44:41.373527] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:52.413 10:44:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.413 10:44:41 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:52.413 10:44:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.413 10:44:41 -- common/autotest_common.sh@10 -- # set +x 00:06:52.672 10:44:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.672 10:44:41 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:52.672 10:44:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.672 
10:44:41 -- common/autotest_common.sh@10 -- # set +x 00:06:52.672 10:44:41 -- accel/accel_rpc.sh@42 -- # grep software 00:06:52.672 10:44:41 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:52.672 10:44:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.672 software 00:06:52.672 00:06:52.672 real 0m0.228s 00:06:52.672 user 0m0.037s 00:06:52.672 sys 0m0.008s 00:06:52.672 10:44:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.672 10:44:41 -- common/autotest_common.sh@10 -- # set +x 00:06:52.672 ************************************ 00:06:52.672 END TEST accel_assign_opcode 00:06:52.672 ************************************ 00:06:52.672 10:44:41 -- accel/accel_rpc.sh@55 -- # killprocess 1304934 00:06:52.672 10:44:41 -- common/autotest_common.sh@936 -- # '[' -z 1304934 ']' 00:06:52.672 10:44:41 -- common/autotest_common.sh@940 -- # kill -0 1304934 00:06:52.672 10:44:41 -- common/autotest_common.sh@941 -- # uname 00:06:52.672 10:44:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:52.672 10:44:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1304934 00:06:52.931 10:44:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:52.931 10:44:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:52.931 10:44:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1304934' 00:06:52.931 killing process with pid 1304934 00:06:52.931 10:44:41 -- common/autotest_common.sh@955 -- # kill 1304934 00:06:52.931 10:44:41 -- common/autotest_common.sh@960 -- # wait 1304934 00:06:53.191 00:06:53.191 real 0m0.971s 00:06:53.191 user 0m0.897s 00:06:53.191 sys 0m0.410s 00:06:53.191 10:44:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:53.191 10:44:41 -- common/autotest_common.sh@10 -- # set +x 00:06:53.191 ************************************ 00:06:53.191 END TEST accel_rpc 00:06:53.191 ************************************ 00:06:53.191 10:44:42 -- spdk/autotest.sh@178 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:53.191 10:44:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:53.191 10:44:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.191 10:44:42 -- common/autotest_common.sh@10 -- # set +x 00:06:53.191 ************************************ 00:06:53.191 START TEST app_cmdline 00:06:53.191 ************************************ 00:06:53.191 10:44:42 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:53.191 * Looking for test storage... 
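The accel_rpc suite that just finished reduces to a short RPC conversation with a target parked in --wait-for-rpc: assign the copy opcode to a module (the bogus -m incorrect assignment is accepted at that stage too, which is what the first call probes), start the framework, then read the assignment back. As a sketch in rpc.py terms, with the long workspace paths shortened:

  ./build/bin/spdk_tgt --wait-for-rpc &                     # then wait for the socket, as waitforlisten does above
  ./scripts/rpc.py accel_assign_opc -o copy -m software     # pin opcode copy to the software module
  ./scripts/rpc.py framework_start_init                     # assignments must land before this
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # prints: software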
00:06:53.191 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:53.191 10:44:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:53.191 10:44:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:53.191 10:44:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:53.191 10:44:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:53.191 10:44:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:53.191 10:44:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:53.191 10:44:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:53.191 10:44:42 -- scripts/common.sh@335 -- # IFS=.-: 00:06:53.191 10:44:42 -- scripts/common.sh@335 -- # read -ra ver1 00:06:53.191 10:44:42 -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.191 10:44:42 -- scripts/common.sh@336 -- # read -ra ver2 00:06:53.191 10:44:42 -- scripts/common.sh@337 -- # local 'op=<' 00:06:53.191 10:44:42 -- scripts/common.sh@339 -- # ver1_l=2 00:06:53.191 10:44:42 -- scripts/common.sh@340 -- # ver2_l=1 00:06:53.191 10:44:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:53.191 10:44:42 -- scripts/common.sh@343 -- # case "$op" in 00:06:53.191 10:44:42 -- scripts/common.sh@344 -- # : 1 00:06:53.191 10:44:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:53.191 10:44:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:53.191 10:44:42 -- scripts/common.sh@364 -- # decimal 1 00:06:53.191 10:44:42 -- scripts/common.sh@352 -- # local d=1 00:06:53.191 10:44:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.191 10:44:42 -- scripts/common.sh@354 -- # echo 1 00:06:53.191 10:44:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:53.191 10:44:42 -- scripts/common.sh@365 -- # decimal 2 00:06:53.451 10:44:42 -- scripts/common.sh@352 -- # local d=2 00:06:53.451 10:44:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.451 10:44:42 -- scripts/common.sh@354 -- # echo 2 00:06:53.451 10:44:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:53.451 10:44:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:53.451 10:44:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:53.451 10:44:42 -- scripts/common.sh@367 -- # return 0 00:06:53.451 10:44:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.451 10:44:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:53.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.451 --rc genhtml_branch_coverage=1 00:06:53.451 --rc genhtml_function_coverage=1 00:06:53.451 --rc genhtml_legend=1 00:06:53.451 --rc geninfo_all_blocks=1 00:06:53.451 --rc geninfo_unexecuted_blocks=1 00:06:53.451 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:53.451 ' 00:06:53.451 10:44:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:53.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.451 --rc genhtml_branch_coverage=1 00:06:53.451 --rc genhtml_function_coverage=1 00:06:53.451 --rc genhtml_legend=1 00:06:53.451 --rc geninfo_all_blocks=1 00:06:53.451 --rc geninfo_unexecuted_blocks=1 00:06:53.451 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:53.451 ' 00:06:53.451 10:44:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:53.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.451 --rc genhtml_branch_coverage=1 00:06:53.451 
--rc genhtml_function_coverage=1 00:06:53.451 --rc genhtml_legend=1 00:06:53.451 --rc geninfo_all_blocks=1 00:06:53.451 --rc geninfo_unexecuted_blocks=1 00:06:53.451 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:53.451 ' 00:06:53.451 10:44:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:53.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.451 --rc genhtml_branch_coverage=1 00:06:53.451 --rc genhtml_function_coverage=1 00:06:53.451 --rc genhtml_legend=1 00:06:53.451 --rc geninfo_all_blocks=1 00:06:53.451 --rc geninfo_unexecuted_blocks=1 00:06:53.451 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:53.451 ' 00:06:53.451 10:44:42 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:53.451 10:44:42 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1305157 00:06:53.451 10:44:42 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:53.451 10:44:42 -- app/cmdline.sh@18 -- # waitforlisten 1305157 00:06:53.451 10:44:42 -- common/autotest_common.sh@829 -- # '[' -z 1305157 ']' 00:06:53.451 10:44:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.451 10:44:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.451 10:44:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.451 10:44:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.451 10:44:42 -- common/autotest_common.sh@10 -- # set +x 00:06:53.451 [2024-12-15 10:44:42.233389] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:53.451 [2024-12-15 10:44:42.233464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1305157 ] 00:06:53.451 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.451 [2024-12-15 10:44:42.301477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.451 [2024-12-15 10:44:42.369501] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:53.451 [2024-12-15 10:44:42.369615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.388 10:44:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.388 10:44:43 -- common/autotest_common.sh@862 -- # return 0 00:06:54.388 10:44:43 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:54.388 { 00:06:54.388 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:06:54.388 "fields": { 00:06:54.388 "major": 24, 00:06:54.388 "minor": 1, 00:06:54.388 "patch": 1, 00:06:54.388 "suffix": "-pre", 00:06:54.388 "commit": "c13c99a5e" 00:06:54.388 } 00:06:54.388 } 00:06:54.388 10:44:43 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:54.388 10:44:43 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:54.388 10:44:43 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:54.388 10:44:43 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:54.388 10:44:43 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:54.388 10:44:43 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:54.388 10:44:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.388 10:44:43 -- common/autotest_common.sh@10 -- # set +x 00:06:54.388 10:44:43 -- app/cmdline.sh@26 -- # sort 00:06:54.388 10:44:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.388 10:44:43 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:54.388 10:44:43 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:54.388 10:44:43 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:54.388 10:44:43 -- common/autotest_common.sh@650 -- # local es=0 00:06:54.388 10:44:43 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:54.388 10:44:43 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:54.388 10:44:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.388 10:44:43 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:54.388 10:44:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.388 10:44:43 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:54.388 10:44:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.388 10:44:43 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:54.388 10:44:43 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:06:54.388 10:44:43 -- 
common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:54.648 request: 00:06:54.648 { 00:06:54.648 "method": "env_dpdk_get_mem_stats", 00:06:54.648 "req_id": 1 00:06:54.648 } 00:06:54.648 Got JSON-RPC error response 00:06:54.648 response: 00:06:54.648 { 00:06:54.648 "code": -32601, 00:06:54.648 "message": "Method not found" 00:06:54.648 } 00:06:54.648 10:44:43 -- common/autotest_common.sh@653 -- # es=1 00:06:54.648 10:44:43 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:54.648 10:44:43 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:54.648 10:44:43 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:54.648 10:44:43 -- app/cmdline.sh@1 -- # killprocess 1305157 00:06:54.648 10:44:43 -- common/autotest_common.sh@936 -- # '[' -z 1305157 ']' 00:06:54.648 10:44:43 -- common/autotest_common.sh@940 -- # kill -0 1305157 00:06:54.648 10:44:43 -- common/autotest_common.sh@941 -- # uname 00:06:54.648 10:44:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:54.648 10:44:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1305157 00:06:54.648 10:44:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:54.648 10:44:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:54.648 10:44:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1305157' 00:06:54.648 killing process with pid 1305157 00:06:54.648 10:44:43 -- common/autotest_common.sh@955 -- # kill 1305157 00:06:54.648 10:44:43 -- common/autotest_common.sh@960 -- # wait 1305157 00:06:54.906 00:06:54.907 real 0m1.849s 00:06:54.907 user 0m2.200s 00:06:54.907 sys 0m0.506s 00:06:54.907 10:44:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:54.907 10:44:43 -- common/autotest_common.sh@10 -- # set +x 00:06:54.907 ************************************ 00:06:54.907 END TEST app_cmdline 00:06:54.907 ************************************ 00:06:55.166 10:44:43 -- spdk/autotest.sh@179 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:55.166 10:44:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:55.166 10:44:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.166 10:44:43 -- common/autotest_common.sh@10 -- # set +x 00:06:55.166 ************************************ 00:06:55.166 START TEST version 00:06:55.166 ************************************ 00:06:55.166 10:44:43 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:55.166 * Looking for test storage... 
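The -32601 above is the expected outcome, not a failure: cmdline.sh starts the target with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods answer and anything else is refused with Method not found. A hand-run sketch of the same probe (paths shortened):

  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort   # rpc_get_methods spdk_get_version
  ./scripts/rpc.py spdk_get_version                       # allowed: the version object shown above
  ./scripts/rpc.py env_dpdk_get_mem_stats                 # refused: JSON-RPC error -32601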
00:06:55.166 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:55.166 10:44:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:55.166 10:44:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:55.166 10:44:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:55.166 10:44:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:55.166 10:44:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:55.166 10:44:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:55.166 10:44:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:55.166 10:44:44 -- scripts/common.sh@335 -- # IFS=.-: 00:06:55.166 10:44:44 -- scripts/common.sh@335 -- # read -ra ver1 00:06:55.166 10:44:44 -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.166 10:44:44 -- scripts/common.sh@336 -- # read -ra ver2 00:06:55.166 10:44:44 -- scripts/common.sh@337 -- # local 'op=<' 00:06:55.166 10:44:44 -- scripts/common.sh@339 -- # ver1_l=2 00:06:55.166 10:44:44 -- scripts/common.sh@340 -- # ver2_l=1 00:06:55.166 10:44:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:55.166 10:44:44 -- scripts/common.sh@343 -- # case "$op" in 00:06:55.166 10:44:44 -- scripts/common.sh@344 -- # : 1 00:06:55.166 10:44:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:55.166 10:44:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.166 10:44:44 -- scripts/common.sh@364 -- # decimal 1 00:06:55.166 10:44:44 -- scripts/common.sh@352 -- # local d=1 00:06:55.166 10:44:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.166 10:44:44 -- scripts/common.sh@354 -- # echo 1 00:06:55.166 10:44:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:55.166 10:44:44 -- scripts/common.sh@365 -- # decimal 2 00:06:55.166 10:44:44 -- scripts/common.sh@352 -- # local d=2 00:06:55.166 10:44:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.166 10:44:44 -- scripts/common.sh@354 -- # echo 2 00:06:55.166 10:44:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:55.166 10:44:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:55.166 10:44:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:55.166 10:44:44 -- scripts/common.sh@367 -- # return 0 00:06:55.166 10:44:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.166 10:44:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:55.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.166 --rc genhtml_branch_coverage=1 00:06:55.166 --rc genhtml_function_coverage=1 00:06:55.166 --rc genhtml_legend=1 00:06:55.166 --rc geninfo_all_blocks=1 00:06:55.166 --rc geninfo_unexecuted_blocks=1 00:06:55.166 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:55.166 ' 00:06:55.166 10:44:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:55.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.166 --rc genhtml_branch_coverage=1 00:06:55.166 --rc genhtml_function_coverage=1 00:06:55.166 --rc genhtml_legend=1 00:06:55.166 --rc geninfo_all_blocks=1 00:06:55.166 --rc geninfo_unexecuted_blocks=1 00:06:55.166 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:55.166 ' 00:06:55.166 10:44:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:55.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.166 --rc genhtml_branch_coverage=1 00:06:55.166 
--rc genhtml_function_coverage=1 00:06:55.166 --rc genhtml_legend=1 00:06:55.166 --rc geninfo_all_blocks=1 00:06:55.166 --rc geninfo_unexecuted_blocks=1 00:06:55.166 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:55.166 ' 00:06:55.166 10:44:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:55.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.166 --rc genhtml_branch_coverage=1 00:06:55.166 --rc genhtml_function_coverage=1 00:06:55.166 --rc genhtml_legend=1 00:06:55.166 --rc geninfo_all_blocks=1 00:06:55.166 --rc geninfo_unexecuted_blocks=1 00:06:55.166 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:55.166 ' 00:06:55.166 10:44:44 -- app/version.sh@17 -- # get_header_version major 00:06:55.166 10:44:44 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:55.166 10:44:44 -- app/version.sh@14 -- # cut -f2 00:06:55.166 10:44:44 -- app/version.sh@14 -- # tr -d '"' 00:06:55.166 10:44:44 -- app/version.sh@17 -- # major=24 00:06:55.166 10:44:44 -- app/version.sh@18 -- # get_header_version minor 00:06:55.166 10:44:44 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:55.166 10:44:44 -- app/version.sh@14 -- # cut -f2 00:06:55.166 10:44:44 -- app/version.sh@14 -- # tr -d '"' 00:06:55.166 10:44:44 -- app/version.sh@18 -- # minor=1 00:06:55.166 10:44:44 -- app/version.sh@19 -- # get_header_version patch 00:06:55.166 10:44:44 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:55.166 10:44:44 -- app/version.sh@14 -- # cut -f2 00:06:55.166 10:44:44 -- app/version.sh@14 -- # tr -d '"' 00:06:55.166 10:44:44 -- app/version.sh@19 -- # patch=1 00:06:55.166 10:44:44 -- app/version.sh@20 -- # get_header_version suffix 00:06:55.166 10:44:44 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:55.166 10:44:44 -- app/version.sh@14 -- # cut -f2 00:06:55.166 10:44:44 -- app/version.sh@14 -- # tr -d '"' 00:06:55.166 10:44:44 -- app/version.sh@20 -- # suffix=-pre 00:06:55.166 10:44:44 -- app/version.sh@22 -- # version=24.1 00:06:55.166 10:44:44 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:55.166 10:44:44 -- app/version.sh@25 -- # version=24.1.1 00:06:55.166 10:44:44 -- app/version.sh@28 -- # version=24.1.1rc0 00:06:55.166 10:44:44 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:55.166 10:44:44 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:55.426 10:44:44 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:06:55.426 10:44:44 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:06:55.426 00:06:55.426 real 0m0.261s 00:06:55.426 user 0m0.155s 00:06:55.426 sys 0m0.158s 00:06:55.426 10:44:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:55.426 10:44:44 -- common/autotest_common.sh@10 -- # set +x 00:06:55.426 
************************************ 00:06:55.426 END TEST version 00:06:55.426 ************************************ 00:06:55.426 10:44:44 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:06:55.426 10:44:44 -- spdk/autotest.sh@191 -- # uname -s 00:06:55.426 10:44:44 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:06:55.426 10:44:44 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:06:55.426 10:44:44 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:06:55.426 10:44:44 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:06:55.426 10:44:44 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:06:55.426 10:44:44 -- spdk/autotest.sh@255 -- # timing_exit lib 00:06:55.426 10:44:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:55.426 10:44:44 -- common/autotest_common.sh@10 -- # set +x 00:06:55.426 10:44:44 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:06:55.426 10:44:44 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:06:55.426 10:44:44 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:06:55.426 10:44:44 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:06:55.426 10:44:44 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:06:55.426 10:44:44 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:06:55.426 10:44:44 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:06:55.426 10:44:44 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:06:55.426 10:44:44 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:06:55.426 10:44:44 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:06:55.426 10:44:44 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:06:55.426 10:44:44 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:06:55.426 10:44:44 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:06:55.426 10:44:44 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:06:55.426 10:44:44 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:06:55.426 10:44:44 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:06:55.426 10:44:44 -- spdk/autotest.sh@361 -- # [[ 1 -eq 1 ]] 00:06:55.426 10:44:44 -- spdk/autotest.sh@362 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:55.426 10:44:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:55.426 10:44:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.426 10:44:44 -- common/autotest_common.sh@10 -- # set +x 00:06:55.426 ************************************ 00:06:55.426 START TEST llvm_fuzz 00:06:55.426 ************************************ 00:06:55.426 10:44:44 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:55.426 * Looking for test storage... 
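The version test that just closed assembles its expected string straight out of version.h: each component is grepped from its #define, 24.1 gains the patch digit because patch != 0, and the -pre suffix maps to rc0 before the result is compared against python3 -c 'import spdk; print(spdk.__version__)'. Condensed from the trace (the grep/cut/tr pipeline is the one version.sh runs; the uppercasing is an assumption of this sketch):

  get_header_version() {   # major | minor | patch | suffix
      grep -E "^#define SPDK_VERSION_${1^^}[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
  }
  version="$(get_header_version major).$(get_header_version minor)"   # 24.1
  patch=$(get_header_version patch)
  ((patch != 0)) && version+=".$patch"                                # 24.1.1
  [[ $(get_header_version suffix) == -pre ]] && version+=rc0          # 24.1.1rc0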
00:06:55.426 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:06:55.426 10:44:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:55.426 10:44:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:55.426 10:44:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:55.686 10:44:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:55.686 10:44:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:55.686 10:44:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:55.686 10:44:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:55.686 10:44:44 -- scripts/common.sh@335 -- # IFS=.-: 00:06:55.686 10:44:44 -- scripts/common.sh@335 -- # read -ra ver1 00:06:55.686 10:44:44 -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.686 10:44:44 -- scripts/common.sh@336 -- # read -ra ver2 00:06:55.686 10:44:44 -- scripts/common.sh@337 -- # local 'op=<' 00:06:55.686 10:44:44 -- scripts/common.sh@339 -- # ver1_l=2 00:06:55.686 10:44:44 -- scripts/common.sh@340 -- # ver2_l=1 00:06:55.686 10:44:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:55.686 10:44:44 -- scripts/common.sh@343 -- # case "$op" in 00:06:55.686 10:44:44 -- scripts/common.sh@344 -- # : 1 00:06:55.686 10:44:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:55.686 10:44:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.686 10:44:44 -- scripts/common.sh@364 -- # decimal 1 00:06:55.686 10:44:44 -- scripts/common.sh@352 -- # local d=1 00:06:55.686 10:44:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.686 10:44:44 -- scripts/common.sh@354 -- # echo 1 00:06:55.686 10:44:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:55.686 10:44:44 -- scripts/common.sh@365 -- # decimal 2 00:06:55.686 10:44:44 -- scripts/common.sh@352 -- # local d=2 00:06:55.686 10:44:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.686 10:44:44 -- scripts/common.sh@354 -- # echo 2 00:06:55.686 10:44:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:55.686 10:44:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:55.686 10:44:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:55.686 10:44:44 -- scripts/common.sh@367 -- # return 0 00:06:55.686 10:44:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.686 10:44:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:55.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.686 --rc genhtml_branch_coverage=1 00:06:55.686 --rc genhtml_function_coverage=1 00:06:55.686 --rc genhtml_legend=1 00:06:55.686 --rc geninfo_all_blocks=1 00:06:55.686 --rc geninfo_unexecuted_blocks=1 00:06:55.686 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:55.686 ' 00:06:55.686 10:44:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:55.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.686 --rc genhtml_branch_coverage=1 00:06:55.686 --rc genhtml_function_coverage=1 00:06:55.686 --rc genhtml_legend=1 00:06:55.686 --rc geninfo_all_blocks=1 00:06:55.686 --rc geninfo_unexecuted_blocks=1 00:06:55.686 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:55.686 ' 00:06:55.686 10:44:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:55.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.686 --rc genhtml_branch_coverage=1 00:06:55.686 
--rc genhtml_function_coverage=1 00:06:55.686 --rc genhtml_legend=1 00:06:55.686 --rc geninfo_all_blocks=1 00:06:55.686 --rc geninfo_unexecuted_blocks=1 00:06:55.686 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:55.686 ' 00:06:55.686 10:44:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:55.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.686 --rc genhtml_branch_coverage=1 00:06:55.686 --rc genhtml_function_coverage=1 00:06:55.686 --rc genhtml_legend=1 00:06:55.686 --rc geninfo_all_blocks=1 00:06:55.686 --rc geninfo_unexecuted_blocks=1 00:06:55.686 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:55.686 ' 00:06:55.686 10:44:44 -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:06:55.686 10:44:44 -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:06:55.686 10:44:44 -- common/autotest_common.sh@548 -- # fuzzers=() 00:06:55.686 10:44:44 -- common/autotest_common.sh@548 -- # local fuzzers 00:06:55.686 10:44:44 -- common/autotest_common.sh@550 -- # [[ -n '' ]] 00:06:55.686 10:44:44 -- common/autotest_common.sh@553 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:06:55.686 10:44:44 -- common/autotest_common.sh@554 -- # fuzzers=("${fuzzers[@]##*/}") 00:06:55.686 10:44:44 -- common/autotest_common.sh@557 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:06:55.686 10:44:44 -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:55.686 10:44:44 -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:06:55.686 10:44:44 -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:55.686 10:44:44 -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:55.686 10:44:44 -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:55.686 10:44:44 -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:55.686 10:44:44 -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:55.686 10:44:44 -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:55.686 10:44:44 -- fuzz/llvm.sh@19 -- # run_test nvmf_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:55.686 10:44:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:55.686 10:44:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.686 10:44:44 -- common/autotest_common.sh@10 -- # set +x 00:06:55.686 ************************************ 00:06:55.687 START TEST nvmf_fuzz 00:06:55.687 ************************************ 00:06:55.687 10:44:44 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:55.687 * Looking for test storage... 
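llvm.sh's dispatch, traced above, is just a directory listing turned into a loop: everything under test/fuzz/llvm/ (common.sh llvm-gcov.sh nvmf vfio, per the echo) becomes a candidate, and the case statement forwards only the real fuzz targets to run_test. Reconstructed from the trace, not copied from the script:

  fuzzers=("$rootdir/test/fuzz/llvm/"*)   # common.sh llvm-gcov.sh nvmf vfio
  fuzzers=("${fuzzers[@]##*/}")           # strip the leading directories
  for fuzzer in "${fuzzers[@]}"; do
      case "$fuzzer" in
          nvmf | vfio) run_test "${fuzzer}_fuzz" "$rootdir/test/fuzz/llvm/$fuzzer/run.sh" ;;
          *) ;;                           # helper scripts fall through
      esac
  done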
00:06:55.687 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:55.687 10:44:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:55.687 10:44:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:55.687 10:44:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:55.687 10:44:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:55.687 10:44:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:55.687 10:44:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:55.687 10:44:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:55.687 10:44:44 -- scripts/common.sh@335 -- # IFS=.-: 00:06:55.687 10:44:44 -- scripts/common.sh@335 -- # read -ra ver1 00:06:55.687 10:44:44 -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.687 10:44:44 -- scripts/common.sh@336 -- # read -ra ver2 00:06:55.687 10:44:44 -- scripts/common.sh@337 -- # local 'op=<' 00:06:55.687 10:44:44 -- scripts/common.sh@339 -- # ver1_l=2 00:06:55.687 10:44:44 -- scripts/common.sh@340 -- # ver2_l=1 00:06:55.687 10:44:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:55.687 10:44:44 -- scripts/common.sh@343 -- # case "$op" in 00:06:55.687 10:44:44 -- scripts/common.sh@344 -- # : 1 00:06:55.687 10:44:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:55.687 10:44:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.687 10:44:44 -- scripts/common.sh@364 -- # decimal 1 00:06:55.687 10:44:44 -- scripts/common.sh@352 -- # local d=1 00:06:55.687 10:44:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.687 10:44:44 -- scripts/common.sh@354 -- # echo 1 00:06:55.687 10:44:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:55.687 10:44:44 -- scripts/common.sh@365 -- # decimal 2 00:06:55.687 10:44:44 -- scripts/common.sh@352 -- # local d=2 00:06:55.687 10:44:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.687 10:44:44 -- scripts/common.sh@354 -- # echo 2 00:06:55.687 10:44:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:55.687 10:44:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:55.687 10:44:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:55.687 10:44:44 -- scripts/common.sh@367 -- # return 0 00:06:55.687 10:44:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.687 10:44:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:55.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.687 --rc genhtml_branch_coverage=1 00:06:55.687 --rc genhtml_function_coverage=1 00:06:55.687 --rc genhtml_legend=1 00:06:55.687 --rc geninfo_all_blocks=1 00:06:55.687 --rc geninfo_unexecuted_blocks=1 00:06:55.687 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:55.687 ' 00:06:55.687 10:44:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:55.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.687 --rc genhtml_branch_coverage=1 00:06:55.687 --rc genhtml_function_coverage=1 00:06:55.687 --rc genhtml_legend=1 00:06:55.687 --rc geninfo_all_blocks=1 00:06:55.687 --rc geninfo_unexecuted_blocks=1 00:06:55.687 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:55.687 ' 00:06:55.687 10:44:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:55.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.687 --rc genhtml_branch_coverage=1 
00:06:55.687 --rc genhtml_function_coverage=1 00:06:55.687 --rc genhtml_legend=1 00:06:55.687 --rc geninfo_all_blocks=1 00:06:55.687 --rc geninfo_unexecuted_blocks=1 00:06:55.687 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:55.687 ' 00:06:55.687 10:44:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:55.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.687 --rc genhtml_branch_coverage=1 00:06:55.687 --rc genhtml_function_coverage=1 00:06:55.687 --rc genhtml_legend=1 00:06:55.687 --rc geninfo_all_blocks=1 00:06:55.687 --rc geninfo_unexecuted_blocks=1 00:06:55.687 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:55.687 ' 00:06:55.687 10:44:44 -- nvmf/run.sh@52 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:06:55.687 10:44:44 -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:06:55.687 10:44:44 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:55.687 10:44:44 -- common/autotest_common.sh@34 -- # set -e 00:06:55.687 10:44:44 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:55.687 10:44:44 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:55.687 10:44:44 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:55.687 10:44:44 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:06:55.687 10:44:44 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:55.687 10:44:44 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:55.687 10:44:44 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:55.687 10:44:44 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:55.687 10:44:44 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:55.687 10:44:44 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:55.687 10:44:44 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:55.687 10:44:44 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:55.687 10:44:44 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:55.687 10:44:44 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:55.687 10:44:44 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:55.687 10:44:44 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:55.687 10:44:44 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:55.687 10:44:44 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:55.687 10:44:44 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:55.687 10:44:44 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:55.687 10:44:44 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:55.687 10:44:44 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:55.687 10:44:44 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:55.687 10:44:44 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:55.687 10:44:44 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:55.687 10:44:44 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:55.687 10:44:44 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:55.687 10:44:44 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:55.687 10:44:44 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:55.687 
10:44:44 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:55.687 10:44:44 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:55.687 10:44:44 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:55.687 10:44:44 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:55.687 10:44:44 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:55.687 10:44:44 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:55.687 10:44:44 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:55.687 10:44:44 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:55.687 10:44:44 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:55.687 10:44:44 -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:06:55.687 10:44:44 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:55.687 10:44:44 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:55.687 10:44:44 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:55.687 10:44:44 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:55.687 10:44:44 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:55.687 10:44:44 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:55.687 10:44:44 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:55.687 10:44:44 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:55.687 10:44:44 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:55.687 10:44:44 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:55.687 10:44:44 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:55.687 10:44:44 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:55.687 10:44:44 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:55.687 10:44:44 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:55.687 10:44:44 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:55.687 10:44:44 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:06:55.687 10:44:44 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:55.687 10:44:44 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:06:55.687 10:44:44 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:06:55.687 10:44:44 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:06:55.687 10:44:44 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:06:55.687 10:44:44 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:06:55.687 10:44:44 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:06:55.688 10:44:44 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:06:55.688 10:44:44 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:06:55.688 10:44:44 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:06:55.688 10:44:44 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:06:55.688 10:44:44 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:06:55.688 10:44:44 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:06:55.688 10:44:44 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:06:55.688 10:44:44 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:55.688 10:44:44 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:06:55.688 10:44:44 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:06:55.688 10:44:44 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:06:55.688 10:44:44 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:06:55.688 10:44:44 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 
00:06:55.688 10:44:44 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:06:55.688 10:44:44 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:06:55.688 10:44:44 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:06:55.688 10:44:44 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:06:55.688 10:44:44 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:06:55.688 10:44:44 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:55.688 10:44:44 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:06:55.688 10:44:44 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:06:55.688 10:44:44 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:55.688 10:44:44 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:55.688 10:44:44 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:55.949 10:44:44 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:55.949 10:44:44 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:55.949 10:44:44 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:55.949 10:44:44 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:55.949 10:44:44 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:55.949 10:44:44 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:55.949 10:44:44 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:55.949 10:44:44 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:55.949 10:44:44 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:55.949 10:44:44 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:55.949 10:44:44 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:55.949 10:44:44 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:55.949 10:44:44 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:55.949 #define SPDK_CONFIG_H 00:06:55.949 #define SPDK_CONFIG_APPS 1 00:06:55.949 #define SPDK_CONFIG_ARCH native 00:06:55.949 #undef SPDK_CONFIG_ASAN 00:06:55.949 #undef SPDK_CONFIG_AVAHI 00:06:55.949 #undef SPDK_CONFIG_CET 00:06:55.949 #define SPDK_CONFIG_COVERAGE 1 00:06:55.949 #define SPDK_CONFIG_CROSS_PREFIX 00:06:55.949 #undef SPDK_CONFIG_CRYPTO 00:06:55.949 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:55.949 #undef SPDK_CONFIG_CUSTOMOCF 00:06:55.949 #undef SPDK_CONFIG_DAOS 00:06:55.949 #define SPDK_CONFIG_DAOS_DIR 00:06:55.949 #define SPDK_CONFIG_DEBUG 1 00:06:55.949 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:55.949 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:55.949 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:55.949 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:55.949 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:55.949 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:55.949 #define SPDK_CONFIG_EXAMPLES 1 00:06:55.949 #undef SPDK_CONFIG_FC 00:06:55.949 #define SPDK_CONFIG_FC_PATH 00:06:55.949 #define SPDK_CONFIG_FIO_PLUGIN 1 
00:06:55.949 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:55.949 #undef SPDK_CONFIG_FUSE 00:06:55.949 #define SPDK_CONFIG_FUZZER 1 00:06:55.949 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:55.949 #undef SPDK_CONFIG_GOLANG 00:06:55.949 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:55.949 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:55.949 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:55.949 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:55.949 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:55.949 #define SPDK_CONFIG_IDXD 1 00:06:55.949 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:55.949 #undef SPDK_CONFIG_IPSEC_MB 00:06:55.949 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:55.949 #define SPDK_CONFIG_ISAL 1 00:06:55.949 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:55.949 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:55.949 #define SPDK_CONFIG_LIBDIR 00:06:55.949 #undef SPDK_CONFIG_LTO 00:06:55.949 #define SPDK_CONFIG_MAX_LCORES 00:06:55.949 #define SPDK_CONFIG_NVME_CUSE 1 00:06:55.949 #undef SPDK_CONFIG_OCF 00:06:55.949 #define SPDK_CONFIG_OCF_PATH 00:06:55.949 #define SPDK_CONFIG_OPENSSL_PATH 00:06:55.949 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:55.949 #undef SPDK_CONFIG_PGO_USE 00:06:55.949 #define SPDK_CONFIG_PREFIX /usr/local 00:06:55.949 #undef SPDK_CONFIG_RAID5F 00:06:55.949 #undef SPDK_CONFIG_RBD 00:06:55.949 #define SPDK_CONFIG_RDMA 1 00:06:55.949 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:55.949 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:55.949 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:55.949 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:55.949 #undef SPDK_CONFIG_SHARED 00:06:55.949 #undef SPDK_CONFIG_SMA 00:06:55.949 #define SPDK_CONFIG_TESTS 1 00:06:55.949 #undef SPDK_CONFIG_TSAN 00:06:55.949 #define SPDK_CONFIG_UBLK 1 00:06:55.949 #define SPDK_CONFIG_UBSAN 1 00:06:55.949 #undef SPDK_CONFIG_UNIT_TESTS 00:06:55.949 #undef SPDK_CONFIG_URING 00:06:55.949 #define SPDK_CONFIG_URING_PATH 00:06:55.949 #undef SPDK_CONFIG_URING_ZNS 00:06:55.949 #undef SPDK_CONFIG_USDT 00:06:55.949 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:55.949 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:55.949 #define SPDK_CONFIG_VFIO_USER 1 00:06:55.949 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:55.949 #define SPDK_CONFIG_VHOST 1 00:06:55.949 #define SPDK_CONFIG_VIRTIO 1 00:06:55.949 #undef SPDK_CONFIG_VTUNE 00:06:55.949 #define SPDK_CONFIG_VTUNE_DIR 00:06:55.949 #define SPDK_CONFIG_WERROR 1 00:06:55.949 #define SPDK_CONFIG_WPDK_DIR 00:06:55.949 #undef SPDK_CONFIG_XNVME 00:06:55.949 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:55.949 10:44:44 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:55.949 10:44:44 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:55.949 10:44:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.949 10:44:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.949 10:44:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.949 10:44:44 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.949 10:44:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.949 10:44:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.949 10:44:44 -- paths/export.sh@5 -- # export PATH 00:06:55.949 10:44:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.949 10:44:44 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:55.949 10:44:44 -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:55.949 10:44:44 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:55.949 10:44:44 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:55.949 10:44:44 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:55.949 10:44:44 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:55.949 10:44:44 -- pm/common@16 -- # TEST_TAG=N/A 00:06:55.949 10:44:44 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:55.949 10:44:44 -- common/autotest_common.sh@52 -- # : 1 00:06:55.949 10:44:44 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:06:55.949 10:44:44 -- common/autotest_common.sh@56 -- # : 0 00:06:55.949 10:44:44 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:55.949 10:44:44 -- common/autotest_common.sh@58 -- # : 0 00:06:55.949 10:44:44 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:06:55.949 10:44:44 -- common/autotest_common.sh@60 -- # : 1 00:06:55.949 10:44:44 -- common/autotest_common.sh@61 -- # export 
SPDK_RUN_FUNCTIONAL_TEST 00:06:55.949 10:44:44 -- common/autotest_common.sh@62 -- # : 0 00:06:55.949 10:44:44 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:06:55.949 10:44:44 -- common/autotest_common.sh@64 -- # : 00:06:55.950 10:44:44 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:06:55.950 10:44:44 -- common/autotest_common.sh@66 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:06:55.950 10:44:44 -- common/autotest_common.sh@68 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:06:55.950 10:44:44 -- common/autotest_common.sh@70 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:06:55.950 10:44:44 -- common/autotest_common.sh@72 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:55.950 10:44:44 -- common/autotest_common.sh@74 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:06:55.950 10:44:44 -- common/autotest_common.sh@76 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:06:55.950 10:44:44 -- common/autotest_common.sh@78 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:06:55.950 10:44:44 -- common/autotest_common.sh@80 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:06:55.950 10:44:44 -- common/autotest_common.sh@82 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:06:55.950 10:44:44 -- common/autotest_common.sh@84 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:06:55.950 10:44:44 -- common/autotest_common.sh@86 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:06:55.950 10:44:44 -- common/autotest_common.sh@88 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:06:55.950 10:44:44 -- common/autotest_common.sh@90 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:55.950 10:44:44 -- common/autotest_common.sh@92 -- # : 1 00:06:55.950 10:44:44 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:06:55.950 10:44:44 -- common/autotest_common.sh@94 -- # : 1 00:06:55.950 10:44:44 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:06:55.950 10:44:44 -- common/autotest_common.sh@96 -- # : rdma 00:06:55.950 10:44:44 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:55.950 10:44:44 -- common/autotest_common.sh@98 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:06:55.950 10:44:44 -- common/autotest_common.sh@100 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:06:55.950 10:44:44 -- common/autotest_common.sh@102 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:06:55.950 10:44:44 -- common/autotest_common.sh@104 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:06:55.950 10:44:44 -- common/autotest_common.sh@106 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:06:55.950 10:44:44 -- common/autotest_common.sh@108 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@109 
-- # export SPDK_TEST_VHOST_INIT 00:06:55.950 10:44:44 -- common/autotest_common.sh@110 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:06:55.950 10:44:44 -- common/autotest_common.sh@112 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:55.950 10:44:44 -- common/autotest_common.sh@114 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:06:55.950 10:44:44 -- common/autotest_common.sh@116 -- # : 1 00:06:55.950 10:44:44 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:06:55.950 10:44:44 -- common/autotest_common.sh@118 -- # : 00:06:55.950 10:44:44 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:55.950 10:44:44 -- common/autotest_common.sh@120 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:06:55.950 10:44:44 -- common/autotest_common.sh@122 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:06:55.950 10:44:44 -- common/autotest_common.sh@124 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:06:55.950 10:44:44 -- common/autotest_common.sh@126 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:06:55.950 10:44:44 -- common/autotest_common.sh@128 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:06:55.950 10:44:44 -- common/autotest_common.sh@130 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:06:55.950 10:44:44 -- common/autotest_common.sh@132 -- # : 00:06:55.950 10:44:44 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:06:55.950 10:44:44 -- common/autotest_common.sh@134 -- # : true 00:06:55.950 10:44:44 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:06:55.950 10:44:44 -- common/autotest_common.sh@136 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:06:55.950 10:44:44 -- common/autotest_common.sh@138 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:06:55.950 10:44:44 -- common/autotest_common.sh@140 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:06:55.950 10:44:44 -- common/autotest_common.sh@142 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:06:55.950 10:44:44 -- common/autotest_common.sh@144 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:06:55.950 10:44:44 -- common/autotest_common.sh@146 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:06:55.950 10:44:44 -- common/autotest_common.sh@148 -- # : 00:06:55.950 10:44:44 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:06:55.950 10:44:44 -- common/autotest_common.sh@150 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:06:55.950 10:44:44 -- common/autotest_common.sh@152 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:06:55.950 10:44:44 -- common/autotest_common.sh@154 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:06:55.950 10:44:44 -- common/autotest_common.sh@156 -- # : 0 00:06:55.950 10:44:44 -- 
common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:06:55.950 10:44:44 -- common/autotest_common.sh@158 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:06:55.950 10:44:44 -- common/autotest_common.sh@160 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:06:55.950 10:44:44 -- common/autotest_common.sh@163 -- # : 00:06:55.950 10:44:44 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:06:55.950 10:44:44 -- common/autotest_common.sh@165 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:06:55.950 10:44:44 -- common/autotest_common.sh@167 -- # : 0 00:06:55.950 10:44:44 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:55.950 10:44:44 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:55.950 10:44:44 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:55.950 10:44:44 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:55.950 10:44:44 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:55.950 10:44:44 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:55.950 10:44:44 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:55.950 10:44:44 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:55.950 10:44:44 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:55.950 10:44:44 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:55.950 10:44:44 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:55.950 10:44:44 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:55.950 10:44:44 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:55.950 10:44:44 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:55.950 10:44:44 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:06:55.950 10:44:44 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:55.950 10:44:44 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:55.950 10:44:44 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:55.950 10:44:44 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:55.950 10:44:44 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:55.950 10:44:44 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:06:55.951 10:44:44 -- common/autotest_common.sh@196 -- # cat 00:06:55.951 10:44:44 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:06:55.951 10:44:44 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:55.951 10:44:44 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:55.951 10:44:44 -- 
common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:55.951 10:44:44 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:55.951 10:44:44 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:06:55.951 10:44:44 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:06:55.951 10:44:44 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:55.951 10:44:44 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:55.951 10:44:44 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:55.951 10:44:44 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:55.951 10:44:44 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:55.951 10:44:44 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:55.951 10:44:44 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:55.951 10:44:44 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:55.951 10:44:44 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:55.951 10:44:44 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:55.951 10:44:44 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:55.951 10:44:44 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:55.951 10:44:44 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:06:55.951 10:44:44 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:06:55.951 10:44:44 -- common/autotest_common.sh@249 -- # _LCOV= 00:06:55.951 10:44:44 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:06:55.951 10:44:44 -- common/autotest_common.sh@250 -- # [[ 1 -eq 1 ]] 00:06:55.951 10:44:44 -- common/autotest_common.sh@250 -- # _LCOV=1 00:06:55.951 10:44:44 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:55.951 10:44:44 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:06:55.951 10:44:44 -- common/autotest_common.sh@255 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:55.951 10:44:44 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:06:55.951 10:44:44 -- common/autotest_common.sh@259 -- # export valgrind= 00:06:55.951 10:44:44 -- common/autotest_common.sh@259 -- # valgrind= 00:06:55.951 10:44:44 -- common/autotest_common.sh@265 -- # uname -s 00:06:55.951 10:44:44 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:06:55.951 10:44:44 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:06:55.951 10:44:44 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:06:55.951 10:44:44 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:06:55.951 10:44:44 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:06:55.951 10:44:44 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:06:55.951 10:44:44 
-- common/autotest_common.sh@275 -- # MAKE=make 00:06:55.951 10:44:44 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j112 00:06:55.951 10:44:44 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:06:55.951 10:44:44 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:06:55.951 10:44:44 -- common/autotest_common.sh@294 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:06:55.951 10:44:44 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:06:55.951 10:44:44 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:06:55.951 10:44:44 -- common/autotest_common.sh@319 -- # [[ -z 1305853 ]] 00:06:55.951 10:44:44 -- common/autotest_common.sh@319 -- # kill -0 1305853 00:06:55.951 10:44:44 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:06:55.951 10:44:44 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:06:55.951 10:44:44 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:06:55.951 10:44:44 -- common/autotest_common.sh@332 -- # local mount target_dir 00:06:55.951 10:44:44 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:06:55.951 10:44:44 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:06:55.951 10:44:44 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:06:55.951 10:44:44 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:06:55.951 10:44:44 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.lF1Y2d 00:06:55.951 10:44:44 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:55.951 10:44:44 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:06:55.951 10:44:44 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:06:55.951 10:44:44 -- common/autotest_common.sh@356 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.lF1Y2d/tests/nvmf /tmp/spdk.lF1Y2d 00:06:55.951 10:44:44 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:06:55.951 10:44:44 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:06:55.951 10:44:44 -- common/autotest_common.sh@328 -- # df -T 00:06:55.951 10:44:44 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:06:55.951 10:44:44 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_devtmpfs 00:06:55.951 10:44:44 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:06:55.951 10:44:44 -- common/autotest_common.sh@363 -- # avails["$mount"]=67108864 00:06:55.951 10:44:44 -- common/autotest_common.sh@363 -- # sizes["$mount"]=67108864 00:06:55.951 10:44:44 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:06:55.951 10:44:44 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:06:55.951 10:44:44 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/pmem0 00:06:55.951 10:44:44 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext2 00:06:55.951 10:44:44 -- common/autotest_common.sh@363 -- # avails["$mount"]=785162240 00:06:55.951 10:44:44 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5284429824 00:06:55.951 10:44:44 -- common/autotest_common.sh@364 -- # uses["$mount"]=4499267584 00:06:55.951 10:44:44 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:06:55.951 10:44:44 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_root 00:06:55.951 10:44:44 -- common/autotest_common.sh@362 -- # 
fss["$mount"]=overlay 00:06:55.951 10:44:44 -- common/autotest_common.sh@363 -- # avails["$mount"]=54553026560 00:06:55.951 10:44:44 -- common/autotest_common.sh@363 -- # sizes["$mount"]=61730594816 00:06:55.951 10:44:44 -- common/autotest_common.sh@364 -- # uses["$mount"]=7177568256 00:06:55.951 10:44:44 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:06:55.951 10:44:44 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:06:55.951 10:44:44 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:06:55.951 10:44:44 -- common/autotest_common.sh@363 -- # avails["$mount"]=30864039936 00:06:55.951 10:44:44 -- common/autotest_common.sh@363 -- # sizes["$mount"]=30865297408 00:06:55.951 10:44:44 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:06:55.951 10:44:44 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:06:55.951 10:44:44 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:06:55.951 10:44:44 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:06:55.951 10:44:44 -- common/autotest_common.sh@363 -- # avails["$mount"]=12340121600 00:06:55.951 10:44:44 -- common/autotest_common.sh@363 -- # sizes["$mount"]=12346122240 00:06:55.951 10:44:44 -- common/autotest_common.sh@364 -- # uses["$mount"]=6000640 00:06:55.951 10:44:44 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:06:55.951 10:44:44 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:06:55.951 10:44:44 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:06:55.951 10:44:44 -- common/autotest_common.sh@363 -- # avails["$mount"]=30865096704 00:06:55.951 10:44:44 -- common/autotest_common.sh@363 -- # sizes["$mount"]=30865297408 00:06:55.951 10:44:44 -- common/autotest_common.sh@364 -- # uses["$mount"]=200704 00:06:55.951 10:44:44 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:06:55.951 10:44:44 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:06:55.951 10:44:44 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:06:55.951 10:44:44 -- common/autotest_common.sh@363 -- # avails["$mount"]=6173044736 00:06:55.951 10:44:44 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6173057024 00:06:55.951 10:44:44 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:06:55.951 10:44:44 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:06:55.951 10:44:44 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:06:55.951 * Looking for test storage... 
00:06:55.951 10:44:44 -- common/autotest_common.sh@369 -- # local target_space new_size 00:06:55.951 10:44:44 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:06:55.951 10:44:44 -- common/autotest_common.sh@373 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:55.951 10:44:44 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:55.951 10:44:44 -- common/autotest_common.sh@373 -- # mount=/ 00:06:55.951 10:44:44 -- common/autotest_common.sh@375 -- # target_space=54553026560 00:06:55.951 10:44:44 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:06:55.951 10:44:44 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:06:55.951 10:44:44 -- common/autotest_common.sh@381 -- # [[ overlay == tmpfs ]] 00:06:55.951 10:44:44 -- common/autotest_common.sh@381 -- # [[ overlay == ramfs ]] 00:06:55.951 10:44:44 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:06:55.951 10:44:44 -- common/autotest_common.sh@382 -- # new_size=9392160768 00:06:55.951 10:44:44 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:55.951 10:44:44 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:55.951 10:44:44 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:55.951 10:44:44 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:55.951 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:55.951 10:44:44 -- common/autotest_common.sh@390 -- # return 0 00:06:55.951 10:44:44 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:06:55.951 10:44:44 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:06:55.951 10:44:44 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:55.951 10:44:44 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:55.951 10:44:44 -- common/autotest_common.sh@1682 -- # true 00:06:55.952 10:44:44 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:06:55.952 10:44:44 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:55.952 10:44:44 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:55.952 10:44:44 -- common/autotest_common.sh@27 -- # exec 00:06:55.952 10:44:44 -- common/autotest_common.sh@29 -- # exec 00:06:55.952 10:44:44 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:55.952 10:44:44 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:55.952 10:44:44 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:55.952 10:44:44 -- common/autotest_common.sh@18 -- # set -x 00:06:55.952 10:44:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:55.952 10:44:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:55.952 10:44:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:55.952 10:44:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:55.952 10:44:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:55.952 10:44:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:55.952 10:44:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:55.952 10:44:44 -- scripts/common.sh@335 -- # IFS=.-: 00:06:55.952 10:44:44 -- scripts/common.sh@335 -- # read -ra ver1 00:06:55.952 10:44:44 -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.952 10:44:44 -- scripts/common.sh@336 -- # read -ra ver2 00:06:55.952 10:44:44 -- scripts/common.sh@337 -- # local 'op=<' 00:06:55.952 10:44:44 -- scripts/common.sh@339 -- # ver1_l=2 00:06:55.952 10:44:44 -- scripts/common.sh@340 -- # ver2_l=1 00:06:55.952 10:44:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:55.952 10:44:44 -- scripts/common.sh@343 -- # case "$op" in 00:06:55.952 10:44:44 -- scripts/common.sh@344 -- # : 1 00:06:55.952 10:44:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:55.952 10:44:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.952 10:44:44 -- scripts/common.sh@364 -- # decimal 1 00:06:55.952 10:44:44 -- scripts/common.sh@352 -- # local d=1 00:06:55.952 10:44:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.952 10:44:44 -- scripts/common.sh@354 -- # echo 1 00:06:55.952 10:44:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:55.952 10:44:44 -- scripts/common.sh@365 -- # decimal 2 00:06:55.952 10:44:44 -- scripts/common.sh@352 -- # local d=2 00:06:55.952 10:44:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.952 10:44:44 -- scripts/common.sh@354 -- # echo 2 00:06:55.952 10:44:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:55.952 10:44:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:55.952 10:44:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:55.952 10:44:44 -- scripts/common.sh@367 -- # return 0 00:06:55.952 10:44:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.952 10:44:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:55.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.952 --rc genhtml_branch_coverage=1 00:06:55.952 --rc genhtml_function_coverage=1 00:06:55.952 --rc genhtml_legend=1 00:06:55.952 --rc geninfo_all_blocks=1 00:06:55.952 --rc geninfo_unexecuted_blocks=1 00:06:55.952 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:55.952 ' 00:06:55.952 10:44:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:55.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.952 --rc genhtml_branch_coverage=1 00:06:55.952 --rc genhtml_function_coverage=1 00:06:55.952 --rc genhtml_legend=1 00:06:55.952 --rc geninfo_all_blocks=1 00:06:55.952 --rc geninfo_unexecuted_blocks=1 00:06:55.952 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:55.952 ' 00:06:55.952 10:44:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:55.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:55.952 --rc genhtml_branch_coverage=1 00:06:55.952 --rc genhtml_function_coverage=1 00:06:55.952 --rc genhtml_legend=1 00:06:55.952 --rc geninfo_all_blocks=1 00:06:55.952 --rc geninfo_unexecuted_blocks=1 00:06:55.952 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:55.952 ' 00:06:55.952 10:44:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:55.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.952 --rc genhtml_branch_coverage=1 00:06:55.952 --rc genhtml_function_coverage=1 00:06:55.952 --rc genhtml_legend=1 00:06:55.952 --rc geninfo_all_blocks=1 00:06:55.952 --rc geninfo_unexecuted_blocks=1 00:06:55.952 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:55.952 ' 00:06:55.952 10:44:44 -- nvmf/run.sh@53 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:06:55.952 10:44:44 -- ../common.sh@8 -- # pids=() 00:06:55.952 10:44:44 -- nvmf/run.sh@55 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:55.952 10:44:44 -- nvmf/run.sh@56 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:55.952 10:44:44 -- nvmf/run.sh@56 -- # fuzz_num=25 00:06:55.952 10:44:44 -- nvmf/run.sh@57 -- # (( fuzz_num != 0 )) 00:06:55.952 10:44:44 -- nvmf/run.sh@59 -- # trap 'cleanup /tmp/llvm_fuzz*; exit 1' SIGINT SIGTERM EXIT 00:06:55.952 10:44:44 -- nvmf/run.sh@61 -- # mem_size=512 00:06:55.952 10:44:44 -- nvmf/run.sh@62 -- # [[ 1 -eq 1 ]] 00:06:55.952 10:44:44 -- nvmf/run.sh@63 -- # start_llvm_fuzz_short 25 1 00:06:55.952 10:44:44 -- ../common.sh@69 -- # local fuzz_num=25 00:06:55.952 10:44:44 -- ../common.sh@70 -- # local time=1 00:06:55.952 10:44:44 -- ../common.sh@72 -- # (( i = 0 )) 00:06:55.952 10:44:44 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:55.952 10:44:44 -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:06:55.952 10:44:44 -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:06:55.952 10:44:44 -- nvmf/run.sh@24 -- # local timen=1 00:06:55.952 10:44:44 -- nvmf/run.sh@25 -- # local core=0x1 00:06:55.952 10:44:44 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:55.952 10:44:44 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:06:55.952 10:44:44 -- nvmf/run.sh@29 -- # printf %02d 0 00:06:55.952 10:44:44 -- nvmf/run.sh@29 -- # port=4400 00:06:55.952 10:44:44 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:55.952 10:44:44 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:06:55.952 10:44:44 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:55.952 10:44:44 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 -r /var/tmp/spdk0.sock 00:06:55.952 [2024-12-15 10:44:44.915341] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 
initialization... 00:06:55.952 [2024-12-15 10:44:44.915409] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1305915 ] 00:06:55.952 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.217 [2024-12-15 10:44:45.091350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.217 [2024-12-15 10:44:45.154346] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:56.217 [2024-12-15 10:44:45.154478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.217 [2024-12-15 10:44:45.213316] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.217 [2024-12-15 10:44:45.229661] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:06:56.477 INFO: Running with entropic power schedule (0xFF, 100). 00:06:56.477 INFO: Seed: 94891365 00:06:56.477 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:06:56.477 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:06:56.477 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:56.477 INFO: A corpus is not provided, starting from an empty corpus 00:06:56.477 #2 INITED exec/s: 0 rss: 60Mb 00:06:56.477 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:56.477 This may also happen if the target rejected all inputs we tried so far 00:06:56.477 [2024-12-15 10:44:45.278904] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:56.477 [2024-12-15 10:44:45.278932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.736 NEW_FUNC[1/670]: 0x43a858 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:06:56.736 NEW_FUNC[2/670]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:56.736 #24 NEW cov: 11541 ft: 11522 corp: 2/125b lim: 320 exec/s: 0 rss: 68Mb L: 124/124 MS: 2 CopyPart-InsertRepeatedBytes- 00:06:56.736 [2024-12-15 10:44:45.579616] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:56.736 [2024-12-15 10:44:45.579649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.736 #30 NEW cov: 11654 ft: 11928 corp: 3/212b lim: 320 exec/s: 0 rss: 69Mb L: 87/124 MS: 1 EraseBytes- 00:06:56.736 [2024-12-15 10:44:45.619661] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:56.736 [2024-12-15 10:44:45.619690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.736 #31 NEW cov: 11660 ft: 12207 corp: 4/300b lim: 320 exec/s: 0 rss: 69Mb L: 88/124 MS: 1 InsertByte- 00:06:56.736 [2024-12-15 10:44:45.659876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:4 
nsid:e2e2e2e2 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:56.736 [2024-12-15 10:44:45.659903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.736 [2024-12-15 10:44:45.659965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:56.736 [2024-12-15 10:44:45.659979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.736 NEW_FUNC[1/1]: 0x12c5228 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2016 00:06:56.736 #33 NEW cov: 11798 ft: 12798 corp: 5/460b lim: 320 exec/s: 0 rss: 69Mb L: 160/160 MS: 2 InsertRepeatedBytes-CrossOver- 00:06:56.736 [2024-12-15 10:44:45.699955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:4 nsid:e2e2e2e2 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffdeffff 00:06:56.736 [2024-12-15 10:44:45.699980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.736 [2024-12-15 10:44:45.700043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:56.736 [2024-12-15 10:44:45.700057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.736 #44 NEW cov: 11798 ft: 12900 corp: 6/620b lim: 320 exec/s: 0 rss: 69Mb L: 160/160 MS: 1 ChangeByte- 00:06:56.736 [2024-12-15 10:44:45.740015] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:56.736 [2024-12-15 10:44:45.740040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.995 #45 NEW cov: 11798 ft: 13095 corp: 7/744b lim: 320 exec/s: 0 rss: 69Mb L: 124/160 MS: 1 ChangeBinInt- 00:06:56.995 [2024-12-15 10:44:45.780198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:4 nsid:e2e2e2e2 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:56.995 [2024-12-15 10:44:45.780223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.995 [2024-12-15 10:44:45.780282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:56.995 [2024-12-15 10:44:45.780296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.995 #46 NEW cov: 11798 ft: 13155 corp: 8/904b lim: 320 exec/s: 0 rss: 69Mb L: 160/160 MS: 1 ChangeByte- 00:06:56.995 [2024-12-15 10:44:45.820322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:4 nsid:e2e2e2e2 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffdeffff 00:06:56.995 [2024-12-15 10:44:45.820348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.995 [2024-12-15 10:44:45.820409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffff20 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:56.995 [2024-12-15 10:44:45.820428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.995 #47 NEW cov: 11798 ft: 13183 corp: 9/1065b lim: 320 exec/s: 0 rss: 69Mb L: 161/161 MS: 1 InsertByte- 00:06:56.995 [2024-12-15 10:44:45.860678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:4 nsid:e2e2e2e2 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffdeffff 00:06:56.995 [2024-12-15 10:44:45.860702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.995 [2024-12-15 10:44:45.860764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffff20 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xff0a0ae2e2e2e2e2 00:06:56.995 [2024-12-15 10:44:45.860778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.995 [2024-12-15 10:44:45.860838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:56.995 [2024-12-15 10:44:45.860852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.995 [2024-12-15 10:44:45.860913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:7 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffe2e2e2e2e2 00:06:56.995 [2024-12-15 10:44:45.860927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.995 #48 NEW cov: 11798 ft: 13573 corp: 10/1375b lim: 320 exec/s: 0 rss: 69Mb L: 310/310 MS: 1 CopyPart- 00:06:56.995 [2024-12-15 10:44:45.900503] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:56.995 [2024-12-15 10:44:45.900527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.995 #51 NEW cov: 11798 ft: 13623 corp: 11/1473b lim: 320 exec/s: 0 rss: 69Mb L: 98/310 MS: 3 CrossOver-InsertByte-CrossOver- 00:06:56.995 [2024-12-15 10:44:45.940591] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffff000000 00:06:56.995 [2024-12-15 10:44:45.940616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.995 #52 NEW cov: 11798 ft: 13671 corp: 12/1571b lim: 320 exec/s: 0 rss: 69Mb L: 98/310 MS: 1 ChangeBinInt- 00:06:56.995 [2024-12-15 10:44:45.980849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:4 nsid:e2e2e2e2 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffff7effdeffff 00:06:56.995 [2024-12-15 10:44:45.980874] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.995 [2024-12-15 10:44:45.980935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:56.995 [2024-12-15 10:44:45.980950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.995 #53 NEW cov: 11798 ft: 13685 corp: 13/1731b lim: 320 exec/s: 0 rss: 69Mb L: 160/310 MS: 1 ChangeByte- 00:06:57.254 [2024-12-15 10:44:46.021056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:e2ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.254 [2024-12-15 10:44:46.021083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.254 [2024-12-15 10:44:46.021146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.254 [2024-12-15 10:44:46.021162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.254 [2024-12-15 10:44:46.021229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.254 [2024-12-15 10:44:46.021242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.254 #54 NEW cov: 11798 ft: 13913 corp: 14/1935b lim: 320 exec/s: 0 rss: 69Mb L: 204/310 MS: 1 CopyPart- 00:06:57.254 [2024-12-15 10:44:46.060984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.254 [2024-12-15 10:44:46.061008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.254 NEW_FUNC[1/1]: 0x16c4058 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:06:57.254 #57 NEW cov: 11811 ft: 14253 corp: 15/2014b lim: 320 exec/s: 0 rss: 69Mb L: 79/310 MS: 3 ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:06:57.254 [2024-12-15 10:44:46.101056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9e) qid:0 cid:4 nsid:ff0a0a72 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x1ffffffff 00:06:57.254 [2024-12-15 10:44:46.101081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.254 #62 NEW cov: 11811 ft: 14267 corp: 16/2120b lim: 320 exec/s: 0 rss: 69Mb L: 106/310 MS: 5 CMP-EraseBytes-ChangeByte-ShuffleBytes-CrossOver- DE: "\374\236\017\300r\177\000\000"- 00:06:57.254 [2024-12-15 10:44:46.131240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:4 nsid:e2e2e2e2 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xff67ffffffdeffff 00:06:57.254 [2024-12-15 10:44:46.131264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:06:57.254 [2024-12-15 10:44:46.131324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.254 [2024-12-15 10:44:46.131338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.254 #63 NEW cov: 11811 ft: 14273 corp: 17/2281b lim: 320 exec/s: 0 rss: 69Mb L: 161/310 MS: 1 InsertByte- 00:06:57.254 [2024-12-15 10:44:46.171323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:4 nsid:e2e2e2e2 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xff67ffffffdeffff 00:06:57.254 [2024-12-15 10:44:46.171347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.254 [2024-12-15 10:44:46.171407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.254 [2024-12-15 10:44:46.171425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.254 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:57.254 #64 NEW cov: 11834 ft: 14329 corp: 18/2436b lim: 320 exec/s: 0 rss: 69Mb L: 155/310 MS: 1 EraseBytes- 00:06:57.254 [2024-12-15 10:44:46.211495] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffff000000 00:06:57.254 [2024-12-15 10:44:46.211520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.254 [2024-12-15 10:44:46.211582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x6e6e6e6e6e6e6e6e 00:06:57.254 [2024-12-15 10:44:46.211598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.254 #65 NEW cov: 11834 ft: 14343 corp: 19/2591b lim: 320 exec/s: 0 rss: 70Mb L: 155/310 MS: 1 InsertRepeatedBytes- 00:06:57.254 [2024-12-15 10:44:46.251777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:e2ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.254 [2024-12-15 10:44:46.251802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.254 [2024-12-15 10:44:46.251864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.254 [2024-12-15 10:44:46.251878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.254 [2024-12-15 10:44:46.251936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.254 [2024-12-15 10:44:46.251950] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.512 #66 NEW cov: 11834 ft: 14357 corp: 20/2799b lim: 320 exec/s: 66 rss: 70Mb L: 208/310 MS: 1 CMP- DE: "\377\377\001\000"- 00:06:57.512 [2024-12-15 10:44:46.291955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:e2ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.512 [2024-12-15 10:44:46.291979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.512 [2024-12-15 10:44:46.292042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.512 [2024-12-15 10:44:46.292055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.512 [2024-12-15 10:44:46.292118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.512 [2024-12-15 10:44:46.292132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.512 [2024-12-15 10:44:46.292190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:7 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.512 [2024-12-15 10:44:46.292203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.512 #67 NEW cov: 11834 ft: 14408 corp: 21/3063b lim: 320 exec/s: 67 rss: 70Mb L: 264/310 MS: 1 InsertRepeatedBytes- 00:06:57.512 [2024-12-15 10:44:46.331785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.512 [2024-12-15 10:44:46.331809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.512 #68 NEW cov: 11834 ft: 14418 corp: 22/3142b lim: 320 exec/s: 68 rss: 70Mb L: 79/310 MS: 1 ShuffleBytes- 00:06:57.512 [2024-12-15 10:44:46.371978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:4 nsid:e2e2e2e2 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xff67ffffffdeffff 00:06:57.512 [2024-12-15 10:44:46.372003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.512 [2024-12-15 10:44:46.372066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.512 [2024-12-15 10:44:46.372083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.512 #69 NEW cov: 11834 ft: 14421 corp: 23/3303b lim: 320 exec/s: 69 rss: 70Mb L: 161/310 MS: 1 ChangeBit- 00:06:57.513 [2024-12-15 10:44:46.411998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:ffff24ff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.513 
[2024-12-15 10:44:46.412022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.513 #70 NEW cov: 11834 ft: 14516 corp: 24/3382b lim: 320 exec/s: 70 rss: 70Mb L: 79/310 MS: 1 ChangeByte- 00:06:57.513 [2024-12-15 10:44:46.452166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:4 nsid:e2e2e2e2 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.513 [2024-12-15 10:44:46.452191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.513 [2024-12-15 10:44:46.452255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:5 nsid:e2e2e2e2 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.513 [2024-12-15 10:44:46.452268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.513 #71 NEW cov: 11834 ft: 14580 corp: 25/3512b lim: 320 exec/s: 71 rss: 70Mb L: 130/310 MS: 1 CrossOver- 00:06:57.513 [2024-12-15 10:44:46.492188] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.513 [2024-12-15 10:44:46.492212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.513 #72 NEW cov: 11834 ft: 14590 corp: 26/3636b lim: 320 exec/s: 72 rss: 70Mb L: 124/310 MS: 1 ChangeBinInt- 00:06:57.771 [2024-12-15 10:44:46.532278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9e) qid:0 cid:4 nsid:ff0a0a72 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x1ffffffff 00:06:57.771 [2024-12-15 10:44:46.532303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.771 #73 NEW cov: 11834 ft: 14627 corp: 27/3742b lim: 320 exec/s: 73 rss: 70Mb L: 106/310 MS: 1 PersAutoDict- DE: "\374\236\017\300r\177\000\000"- 00:06:57.771 [2024-12-15 10:44:46.572383] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffff000000 00:06:57.771 [2024-12-15 10:44:46.572408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.771 #74 NEW cov: 11834 ft: 14641 corp: 28/3840b lim: 320 exec/s: 74 rss: 70Mb L: 98/310 MS: 1 ShuffleBytes- 00:06:57.771 [2024-12-15 10:44:46.612606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:4 nsid:e2e2e2e2 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.771 [2024-12-15 10:44:46.612630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.771 [2024-12-15 10:44:46.612695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.771 [2024-12-15 10:44:46.612708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.771 #75 NEW cov: 11834 ft: 14701 corp: 29/4004b lim: 320 exec/s: 75 rss: 
70Mb L: 164/310 MS: 1 PersAutoDict- DE: "\377\377\001\000"- 00:06:57.771 [2024-12-15 10:44:46.652747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:4 nsid:e2e2e2e2 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.771 [2024-12-15 10:44:46.652775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.771 [2024-12-15 10:44:46.652838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:5 nsid:e2e20001 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.771 [2024-12-15 10:44:46.652852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.771 #76 NEW cov: 11834 ft: 14709 corp: 30/4134b lim: 320 exec/s: 76 rss: 70Mb L: 130/310 MS: 1 PersAutoDict- DE: "\377\377\001\000"- 00:06:57.771 [2024-12-15 10:44:46.692849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:4 nsid:e2e2e2e2 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.771 [2024-12-15 10:44:46.692873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.771 [2024-12-15 10:44:46.692936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.771 [2024-12-15 10:44:46.692950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.771 #77 NEW cov: 11834 ft: 14722 corp: 31/4298b lim: 320 exec/s: 77 rss: 70Mb L: 164/310 MS: 1 ChangeBinInt- 00:06:57.771 [2024-12-15 10:44:46.732964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:4 nsid:e2e2e2e2 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffdeffff 00:06:57.771 [2024-12-15 10:44:46.732988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.771 [2024-12-15 10:44:46.733052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffff20 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.771 [2024-12-15 10:44:46.733064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.771 #78 NEW cov: 11834 ft: 14731 corp: 32/4459b lim: 320 exec/s: 78 rss: 70Mb L: 161/310 MS: 1 ChangeByte- 00:06:57.771 [2024-12-15 10:44:46.772975] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.771 [2024-12-15 10:44:46.773000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.030 #79 NEW cov: 11834 ft: 14739 corp: 33/4546b lim: 320 exec/s: 79 rss: 70Mb L: 87/310 MS: 1 ShuffleBytes- 00:06:58.030 [2024-12-15 10:44:46.813100] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:58.031 [2024-12-15 10:44:46.813125] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.031 #80 NEW cov: 11834 ft: 14765 corp: 34/4670b lim: 320 exec/s: 80 rss: 70Mb L: 124/310 MS: 1 PersAutoDict- DE: "\374\236\017\300r\177\000\000"- 00:06:58.031 [2024-12-15 10:44:46.853564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:58.031 [2024-12-15 10:44:46.853588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.031 [2024-12-15 10:44:46.853650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:58.031 [2024-12-15 10:44:46.853664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.031 [2024-12-15 10:44:46.853725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:6 nsid:e2e2e2e2 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:58.031 [2024-12-15 10:44:46.853741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.031 [2024-12-15 10:44:46.853802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:7 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:58.031 [2024-12-15 10:44:46.853815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.031 #81 NEW cov: 11834 ft: 14866 corp: 35/4952b lim: 320 exec/s: 81 rss: 70Mb L: 282/310 MS: 1 CrossOver- 00:06:58.031 [2024-12-15 10:44:46.893432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.031 [2024-12-15 10:44:46.893456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.031 #82 NEW cov: 11834 ft: 14881 corp: 36/5039b lim: 320 exec/s: 82 rss: 70Mb L: 87/310 MS: 1 CMP- DE: "\000\004\000\000\000\000\000\000"- 00:06:58.031 [2024-12-15 10:44:46.933693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:4 nsid:ffffffff cdw10:0a0ae2e2 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xe2e2e2e2e2e2e2e2 00:06:58.031 [2024-12-15 10:44:46.933717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.031 [2024-12-15 10:44:46.933780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:58.031 [2024-12-15 10:44:46.933794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.031 [2024-12-15 10:44:46.933856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:58.031 [2024-12-15 10:44:46.933870] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.031 #83 NEW cov: 11834 ft: 14949 corp: 37/5279b lim: 320 exec/s: 83 rss: 70Mb L: 240/310 MS: 1 EraseBytes- 00:06:58.031 [2024-12-15 10:44:46.973660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.031 [2024-12-15 10:44:46.973684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.031 #84 NEW cov: 11834 ft: 14977 corp: 38/5366b lim: 320 exec/s: 84 rss: 70Mb L: 87/310 MS: 1 PersAutoDict- DE: "\374\236\017\300r\177\000\000"- 00:06:58.031 [2024-12-15 10:44:47.013774] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xfffffffffffffff8 00:06:58.031 [2024-12-15 10:44:47.013798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.031 #85 NEW cov: 11834 ft: 14989 corp: 39/5490b lim: 320 exec/s: 85 rss: 70Mb L: 124/310 MS: 1 ChangeBinInt- 00:06:58.290 [2024-12-15 10:44:47.054067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:4 nsid:e2e2e2e2 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xff67ffffffdeffff 00:06:58.290 [2024-12-15 10:44:47.054092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.290 [2024-12-15 10:44:47.054156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:58.290 [2024-12-15 10:44:47.054170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.290 [2024-12-15 10:44:47.054232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xa0ae2e2e2e2e2e2 00:06:58.290 [2024-12-15 10:44:47.054246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.290 #86 NEW cov: 11834 ft: 15019 corp: 40/5723b lim: 320 exec/s: 86 rss: 70Mb L: 233/310 MS: 1 CopyPart- 00:06:58.290 [2024-12-15 10:44:47.094121] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffff000000 00:06:58.290 [2024-12-15 10:44:47.094147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.290 [2024-12-15 10:44:47.094210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x6e6e6e6e6e6e6e6e 00:06:58.290 [2024-12-15 10:44:47.094224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.290 #87 NEW cov: 11834 ft: 15027 corp: 41/5878b lim: 320 exec/s: 87 rss: 70Mb L: 155/310 MS: 1 ChangeBinInt- 00:06:58.290 [2024-12-15 10:44:47.134125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 
cid:4 nsid:ffff1bff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.290 [2024-12-15 10:44:47.134150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.290 #88 NEW cov: 11834 ft: 15032 corp: 42/5957b lim: 320 exec/s: 88 rss: 70Mb L: 79/310 MS: 1 ChangeBinInt- 00:06:58.290 [2024-12-15 10:44:47.174218] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xfffffffffffffff8 00:06:58.290 [2024-12-15 10:44:47.174243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.290 #89 NEW cov: 11834 ft: 15041 corp: 43/6081b lim: 320 exec/s: 89 rss: 70Mb L: 124/310 MS: 1 ShuffleBytes- 00:06:58.290 [2024-12-15 10:44:47.214301] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:58.290 [2024-12-15 10:44:47.214327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.290 #90 NEW cov: 11834 ft: 15048 corp: 44/6205b lim: 320 exec/s: 90 rss: 70Mb L: 124/310 MS: 1 ChangeByte- 00:06:58.290 [2024-12-15 10:44:47.244420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.290 [2024-12-15 10:44:47.244444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.290 #91 NEW cov: 11834 ft: 15096 corp: 45/6292b lim: 320 exec/s: 45 rss: 71Mb L: 87/310 MS: 1 ChangeBinInt- 00:06:58.290 #91 DONE cov: 11834 ft: 15096 corp: 45/6292b lim: 320 exec/s: 45 rss: 71Mb 00:06:58.290 ###### Recommended dictionary. ###### 00:06:58.290 "\374\236\017\300r\177\000\000" # Uses: 3 00:06:58.290 "\377\377\001\000" # Uses: 2 00:06:58.290 "\000\004\000\000\000\000\000\000" # Uses: 0 00:06:58.290 ###### End of recommended dictionary. 
###### 00:06:58.290 Done 91 runs in 2 second(s) 00:06:58.550 10:44:47 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_0.conf 00:06:58.550 10:44:47 -- ../common.sh@72 -- # (( i++ )) 00:06:58.550 10:44:47 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:58.550 10:44:47 -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:06:58.550 10:44:47 -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:06:58.550 10:44:47 -- nvmf/run.sh@24 -- # local timen=1 00:06:58.550 10:44:47 -- nvmf/run.sh@25 -- # local core=0x1 00:06:58.550 10:44:47 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:58.550 10:44:47 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:06:58.550 10:44:47 -- nvmf/run.sh@29 -- # printf %02d 1 00:06:58.550 10:44:47 -- nvmf/run.sh@29 -- # port=4401 00:06:58.550 10:44:47 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:58.550 10:44:47 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:06:58.550 10:44:47 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:58.550 10:44:47 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 -r /var/tmp/spdk1.sock 00:06:58.550 [2024-12-15 10:44:47.424393] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:58.550 [2024-12-15 10:44:47.424467] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1306371 ] 00:06:58.550 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.809 [2024-12-15 10:44:47.603581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.809 [2024-12-15 10:44:47.666575] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:58.809 [2024-12-15 10:44:47.666699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.809 [2024-12-15 10:44:47.724367] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.809 [2024-12-15 10:44:47.740699] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:06:58.809 INFO: Running with entropic power schedule (0xFF, 100). 00:06:58.809 INFO: Seed: 2607891266 00:06:58.809 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:06:58.809 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:06:58.809 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:58.809 INFO: A corpus is not provided, starting from an empty corpus 00:06:58.809 #2 INITED exec/s: 0 rss: 60Mb 00:06:58.809 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:58.809 This may also happen if the target rejected all inputs we tried so far 00:06:58.809 [2024-12-15 10:44:47.789768] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa21 00:06:58.809 [2024-12-15 10:44:47.789989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.809 [2024-12-15 10:44:47.790018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.809 [2024-12-15 10:44:47.790075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.809 [2024-12-15 10:44:47.790088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.067 NEW_FUNC[1/671]: 0x43b158 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:06:59.067 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:59.067 #4 NEW cov: 11654 ft: 11654 corp: 2/13b lim: 30 exec/s: 0 rss: 68Mb L: 12/12 MS: 2 InsertByte-InsertRepeatedBytes- 00:06:59.326 [2024-12-15 10:44:48.110975] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa 00:06:59.326 [2024-12-15 10:44:48.111380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.327 [2024-12-15 10:44:48.111437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.327 #5 NEW cov: 11767 ft: 12823 corp: 3/20b lim: 30 exec/s: 0 rss: 68Mb L: 7/12 MS: 1 EraseBytes- 00:06:59.327 [2024-12-15 10:44:48.161057] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:59.327 [2024-12-15 10:44:48.161204] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:59.327 [2024-12-15 10:44:48.161348] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:59.327 [2024-12-15 10:44:48.161728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:fefe83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.327 [2024-12-15 10:44:48.161757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.327 [2024-12-15 10:44:48.161875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.327 [2024-12-15 10:44:48.161893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.327 [2024-12-15 10:44:48.162009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.327 [2024-12-15 10:44:48.162025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.327 #10 NEW cov: 11773 ft: 13311 corp: 4/42b lim: 30 
exec/s: 0 rss: 68Mb L: 22/22 MS: 5 ChangeBinInt-ShuffleBytes-ChangeBit-CopyPart-InsertRepeatedBytes- 00:06:59.327 [2024-12-15 10:44:48.201174] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:59.327 [2024-12-15 10:44:48.201526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.327 [2024-12-15 10:44:48.201555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.327 #16 NEW cov: 11866 ft: 13616 corp: 5/52b lim: 30 exec/s: 0 rss: 68Mb L: 10/22 MS: 1 InsertRepeatedBytes- 00:06:59.327 [2024-12-15 10:44:48.241438] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:59.327 [2024-12-15 10:44:48.241612] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:06:59.327 [2024-12-15 10:44:48.241761] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:59.327 [2024-12-15 10:44:48.242082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:fefe83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.327 [2024-12-15 10:44:48.242109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.327 [2024-12-15 10:44:48.242223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.327 [2024-12-15 10:44:48.242239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.327 [2024-12-15 10:44:48.242349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00008316 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.327 [2024-12-15 10:44:48.242366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.327 #22 NEW cov: 11866 ft: 13684 corp: 6/74b lim: 30 exec/s: 0 rss: 68Mb L: 22/22 MS: 1 ChangeBinInt- 00:06:59.327 [2024-12-15 10:44:48.291390] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (46084) > buf size (4096) 00:06:59.327 [2024-12-15 10:44:48.291738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2d000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.327 [2024-12-15 10:44:48.291766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.327 #23 NEW cov: 11866 ft: 13807 corp: 7/82b lim: 30 exec/s: 0 rss: 68Mb L: 8/22 MS: 1 InsertByte- 00:06:59.586 [2024-12-15 10:44:48.341887] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xff 00:06:59.586 [2024-12-15 10:44:48.342044] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:59.586 [2024-12-15 10:44:48.342193] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:59.586 [2024-12-15 10:44:48.342341] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:59.586 [2024-12-15 10:44:48.342495] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 
0xa21 00:06:59.586 [2024-12-15 10:44:48.342824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2d000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.586 [2024-12-15 10:44:48.342852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.586 [2024-12-15 10:44:48.342975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.586 [2024-12-15 10:44:48.342990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.586 [2024-12-15 10:44:48.343102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.586 [2024-12-15 10:44:48.343117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.586 [2024-12-15 10:44:48.343237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.586 [2024-12-15 10:44:48.343254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.586 [2024-12-15 10:44:48.343372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.586 [2024-12-15 10:44:48.343390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:59.586 #24 NEW cov: 11866 ft: 14348 corp: 8/112b lim: 30 exec/s: 0 rss: 68Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:06:59.586 [2024-12-15 10:44:48.391717] ctrlr.c:2547:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (10752) > len (4) 00:06:59.586 [2024-12-15 10:44:48.392089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.586 [2024-12-15 10:44:48.392117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.586 #27 NEW cov: 11872 ft: 14451 corp: 9/118b lim: 30 exec/s: 0 rss: 68Mb L: 6/30 MS: 3 EraseBytes-CMP-InsertByte- DE: "\000\000\000\020"- 00:06:59.586 [2024-12-15 10:44:48.431952] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (12340) > buf size (4096) 00:06:59.586 [2024-12-15 10:44:48.432104] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (12340) > buf size (4096) 00:06:59.586 [2024-12-15 10:44:48.432450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0c0c000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.586 [2024-12-15 10:44:48.432477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.586 [2024-12-15 10:44:48.432598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0c0c000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.586 [2024-12-15 10:44:48.432614] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.586 #28 NEW cov: 11872 ft: 14506 corp: 10/135b lim: 30 exec/s: 0 rss: 68Mb L: 17/30 MS: 1 InsertRepeatedBytes- 00:06:59.586 [2024-12-15 10:44:48.471913] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa 00:06:59.586 [2024-12-15 10:44:48.472287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.586 [2024-12-15 10:44:48.472316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.586 #29 NEW cov: 11872 ft: 14530 corp: 11/142b lim: 30 exec/s: 0 rss: 68Mb L: 7/30 MS: 1 ShuffleBytes- 00:06:59.586 [2024-12-15 10:44:48.512062] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa 00:06:59.586 [2024-12-15 10:44:48.512377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.586 [2024-12-15 10:44:48.512405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.586 #30 NEW cov: 11872 ft: 14552 corp: 12/151b lim: 30 exec/s: 0 rss: 68Mb L: 9/30 MS: 1 CopyPart- 00:06:59.586 [2024-12-15 10:44:48.552247] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (12292) > buf size (4096) 00:06:59.586 [2024-12-15 10:44:48.552436] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (12340) > buf size (4096) 00:06:59.586 [2024-12-15 10:44:48.552750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0c00000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.586 [2024-12-15 10:44:48.552778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.586 [2024-12-15 10:44:48.552902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0c0c000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.586 [2024-12-15 10:44:48.552918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.586 #31 NEW cov: 11872 ft: 14632 corp: 13/168b lim: 30 exec/s: 0 rss: 69Mb L: 17/30 MS: 1 ChangeByte- 00:06:59.845 [2024-12-15 10:44:48.602533] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000f3f3 00:06:59.845 [2024-12-15 10:44:48.602694] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (249808) > buf size (4096) 00:06:59.845 [2024-12-15 10:44:48.602998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0c0083f4 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.845 [2024-12-15 10:44:48.603026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.845 [2024-12-15 10:44:48.603155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:f3f300f3 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.845 [2024-12-15 10:44:48.603171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:06:59.845 #32 NEW cov: 11872 ft: 14643 corp: 14/185b lim: 30 exec/s: 0 rss: 69Mb L: 17/30 MS: 1 ChangeBinInt- 00:06:59.845 [2024-12-15 10:44:48.652739] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:59.845 [2024-12-15 10:44:48.652893] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:59.845 [2024-12-15 10:44:48.653038] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:59.845 [2024-12-15 10:44:48.653366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:fefe83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.845 [2024-12-15 10:44:48.653396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.845 [2024-12-15 10:44:48.653523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.845 [2024-12-15 10:44:48.653544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.846 [2024-12-15 10:44:48.653664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.846 [2024-12-15 10:44:48.653682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.846 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:59.846 #33 NEW cov: 11895 ft: 14661 corp: 15/207b lim: 30 exec/s: 0 rss: 69Mb L: 22/30 MS: 1 ChangeBit- 00:06:59.846 [2024-12-15 10:44:48.692841] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:59.846 [2024-12-15 10:44:48.693003] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261128) > buf size (4096) 00:06:59.846 [2024-12-15 10:44:48.693154] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:59.846 [2024-12-15 10:44:48.693482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:fefe83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.846 [2024-12-15 10:44:48.693510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.846 [2024-12-15 10:44:48.693629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff010000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.846 [2024-12-15 10:44:48.693645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.846 [2024-12-15 10:44:48.693770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00008304 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.846 [2024-12-15 10:44:48.693788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.846 #34 NEW cov: 11895 ft: 14706 corp: 16/229b lim: 30 exec/s: 0 rss: 69Mb L: 22/30 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\004"- 00:06:59.846 [2024-12-15 10:44:48.732791] 
ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:59.846 [2024-12-15 10:44:48.732952] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff1e 00:06:59.846 [2024-12-15 10:44:48.733094] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:59.846 [2024-12-15 10:44:48.733248] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:59.846 [2024-12-15 10:44:48.733613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:fefe83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.846 [2024-12-15 10:44:48.733641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.846 [2024-12-15 10:44:48.733755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.846 [2024-12-15 10:44:48.733771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.846 [2024-12-15 10:44:48.733903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.846 [2024-12-15 10:44:48.733920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.846 [2024-12-15 10:44:48.734040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.846 [2024-12-15 10:44:48.734059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.846 #35 NEW cov: 11895 ft: 14740 corp: 17/255b lim: 30 exec/s: 0 rss: 69Mb L: 26/30 MS: 1 CMP- DE: "\377\377\377\036"- 00:06:59.846 [2024-12-15 10:44:48.773153] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:59.846 [2024-12-15 10:44:48.773330] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261128) > buf size (4096) 00:06:59.846 [2024-12-15 10:44:48.773504] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (525316) > buf size (4096) 00:06:59.846 [2024-12-15 10:44:48.773663] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:59.846 [2024-12-15 10:44:48.773993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:fefe83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.846 [2024-12-15 10:44:48.774021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.846 [2024-12-15 10:44:48.774137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff010000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.846 [2024-12-15 10:44:48.774154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.846 [2024-12-15 10:44:48.774270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:01000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.846 
[2024-12-15 10:44:48.774288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.846 [2024-12-15 10:44:48.774405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:04ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.846 [2024-12-15 10:44:48.774426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.846 #36 NEW cov: 11895 ft: 14746 corp: 18/281b lim: 30 exec/s: 36 rss: 69Mb L: 26/30 MS: 1 CMP- DE: "\001\000\000\016"- 00:06:59.846 [2024-12-15 10:44:48.823396] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:59.846 [2024-12-15 10:44:48.823583] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261128) > buf size (4096) 00:06:59.846 [2024-12-15 10:44:48.823738] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:59.846 [2024-12-15 10:44:48.823894] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:59.846 [2024-12-15 10:44:48.824040] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:59.846 [2024-12-15 10:44:48.824393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:fefe83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.846 [2024-12-15 10:44:48.824424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.846 [2024-12-15 10:44:48.824544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff010000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.846 [2024-12-15 10:44:48.824564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.846 [2024-12-15 10:44:48.824677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:000383dd cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.846 [2024-12-15 10:44:48.824694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.846 [2024-12-15 10:44:48.824814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.846 [2024-12-15 10:44:48.824832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.846 [2024-12-15 10:44:48.824948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.846 [2024-12-15 10:44:48.824972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:59.846 #37 NEW cov: 11895 ft: 14776 corp: 19/311b lim: 30 exec/s: 37 rss: 69Mb L: 30/30 MS: 1 CMP- DE: "\001\000\000\000\000\000\003\335"- 00:07:00.105 [2024-12-15 10:44:48.863370] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.105 [2024-12-15 10:44:48.863694] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.105 
[2024-12-15 10:44:48.864018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:fefe83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.105 [2024-12-15 10:44:48.864046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.105 [2024-12-15 10:44:48.864168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:01000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.105 [2024-12-15 10:44:48.864184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.105 [2024-12-15 10:44:48.864304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:000483ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.105 [2024-12-15 10:44:48.864319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.105 #38 NEW cov: 11895 ft: 14806 corp: 20/333b lim: 30 exec/s: 38 rss: 69Mb L: 22/30 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\004"- 00:07:00.105 [2024-12-15 10:44:48.903458] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002dff 00:07:00.105 [2024-12-15 10:44:48.903614] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:07:00.105 [2024-12-15 10:44:48.903783] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.105 [2024-12-15 10:44:48.904144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:fefe83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.105 [2024-12-15 10:44:48.904171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.105 [2024-12-15 10:44:48.904296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.105 [2024-12-15 10:44:48.904313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.105 [2024-12-15 10:44:48.904436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.105 [2024-12-15 10:44:48.904455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.105 #39 NEW cov: 11895 ft: 14813 corp: 21/355b lim: 30 exec/s: 39 rss: 69Mb L: 22/30 MS: 1 ChangeByte- 00:07:00.105 [2024-12-15 10:44:48.943544] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002dff 00:07:00.105 [2024-12-15 10:44:48.943706] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2000000ff 00:07:00.105 [2024-12-15 10:44:48.943853] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.105 [2024-12-15 10:44:48.944184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:fefe83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.105 [2024-12-15 10:44:48.944213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.105 [2024-12-15 10:44:48.944327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.105 [2024-12-15 10:44:48.944347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.105 [2024-12-15 10:44:48.944469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.105 [2024-12-15 10:44:48.944486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.105 #45 NEW cov: 11895 ft: 14825 corp: 22/377b lim: 30 exec/s: 45 rss: 69Mb L: 22/30 MS: 1 ChangeByte- 00:07:00.105 [2024-12-15 10:44:48.983667] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.105 [2024-12-15 10:44:48.983824] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.105 [2024-12-15 10:44:48.983983] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.105 [2024-12-15 10:44:48.984335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:fefe83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.105 [2024-12-15 10:44:48.984364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.105 [2024-12-15 10:44:48.984483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.105 [2024-12-15 10:44:48.984500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.105 [2024-12-15 10:44:48.984615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ff1e8316 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.105 [2024-12-15 10:44:48.984633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.105 #46 NEW cov: 11895 ft: 14853 corp: 23/399b lim: 30 exec/s: 46 rss: 69Mb L: 22/30 MS: 1 PersAutoDict- DE: "\377\377\377\036"- 00:07:00.105 [2024-12-15 10:44:49.023826] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (12292) > buf size (4096) 00:07:00.105 [2024-12-15 10:44:49.023991] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (12340) > buf size (4096) 00:07:00.105 [2024-12-15 10:44:49.024143] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000038d 00:07:00.105 [2024-12-15 10:44:49.024293] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (719524) > buf size (4096) 00:07:00.105 [2024-12-15 10:44:49.024647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0c00000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.105 [2024-12-15 10:44:49.024677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.105 [2024-12-15 10:44:49.024795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 
cid:5 nsid:0 cdw10:0c0c000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.105 [2024-12-15 10:44:49.024813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.105 [2024-12-15 10:44:49.024930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0c0c830c cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.105 [2024-12-15 10:44:49.024949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.105 [2024-12-15 10:44:49.025071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:bea8020c cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.105 [2024-12-15 10:44:49.025096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.105 #47 NEW cov: 11895 ft: 14884 corp: 24/424b lim: 30 exec/s: 47 rss: 69Mb L: 25/30 MS: 1 CMP- DE: "\377\003\215\276\250\014\212N"- 00:07:00.105 [2024-12-15 10:44:49.063926] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.105 [2024-12-15 10:44:49.064096] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.105 [2024-12-15 10:44:49.064241] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.105 [2024-12-15 10:44:49.064396] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.105 [2024-12-15 10:44:49.064721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:21ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.105 [2024-12-15 10:44:49.064748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.105 [2024-12-15 10:44:49.064871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.105 [2024-12-15 10:44:49.064887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.105 [2024-12-15 10:44:49.065010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.105 [2024-12-15 10:44:49.065029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.105 [2024-12-15 10:44:49.065143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.105 [2024-12-15 10:44:49.065160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.105 #52 NEW cov: 11895 ft: 14889 corp: 25/451b lim: 30 exec/s: 52 rss: 69Mb L: 27/30 MS: 5 InsertByte-InsertByte-ChangeBit-ChangeBit-InsertRepeatedBytes- 00:07:00.105 [2024-12-15 10:44:49.104151] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xff 00:07:00.106 [2024-12-15 10:44:49.104308] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.106 [2024-12-15 
10:44:49.104478] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.106 [2024-12-15 10:44:49.104621] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.106 [2024-12-15 10:44:49.104772] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa21 00:07:00.106 [2024-12-15 10:44:49.105121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2d000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.106 [2024-12-15 10:44:49.105148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.106 [2024-12-15 10:44:49.105263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.106 [2024-12-15 10:44:49.105279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.106 [2024-12-15 10:44:49.105399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.106 [2024-12-15 10:44:49.105418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.106 [2024-12-15 10:44:49.105521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ff00831e cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.106 [2024-12-15 10:44:49.105539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.106 [2024-12-15 10:44:49.105664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.106 [2024-12-15 10:44:49.105682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:00.364 #53 NEW cov: 11895 ft: 14916 corp: 26/481b lim: 30 exec/s: 53 rss: 69Mb L: 30/30 MS: 1 ChangeBinInt- 00:07:00.364 [2024-12-15 10:44:49.154074] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096) 00:07:00.364 [2024-12-15 10:44:49.154423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.364 [2024-12-15 10:44:49.154451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.364 #54 NEW cov: 11895 ft: 14921 corp: 27/491b lim: 30 exec/s: 54 rss: 69Mb L: 10/30 MS: 1 ChangeBinInt- 00:07:00.364 [2024-12-15 10:44:49.194479] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002dff 00:07:00.364 [2024-12-15 10:44:49.194647] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xdcff 00:07:00.364 [2024-12-15 10:44:49.194799] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.364 [2024-12-15 10:44:49.194946] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.364 [2024-12-15 10:44:49.195278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 
cid:4 nsid:0 cdw10:fefe83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.364 [2024-12-15 10:44:49.195305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.364 [2024-12-15 10:44:49.195431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:dcdc00dc cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.364 [2024-12-15 10:44:49.195448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.364 [2024-12-15 10:44:49.195563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83fe cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.364 [2024-12-15 10:44:49.195579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.364 [2024-12-15 10:44:49.195702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.364 [2024-12-15 10:44:49.195718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.364 #55 NEW cov: 11895 ft: 14972 corp: 28/518b lim: 30 exec/s: 55 rss: 69Mb L: 27/30 MS: 1 InsertRepeatedBytes- 00:07:00.364 [2024-12-15 10:44:49.234422] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.364 [2024-12-15 10:44:49.234596] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.364 [2024-12-15 10:44:49.234924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.364 [2024-12-15 10:44:49.234950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.364 [2024-12-15 10:44:49.235067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.364 [2024-12-15 10:44:49.235085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.364 #56 NEW cov: 11895 ft: 14982 corp: 29/533b lim: 30 exec/s: 56 rss: 69Mb L: 15/30 MS: 1 EraseBytes- 00:07:00.364 [2024-12-15 10:44:49.274669] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (12292) > buf size (4096) 00:07:00.364 [2024-12-15 10:44:49.274827] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (12340) > buf size (4096) 00:07:00.364 [2024-12-15 10:44:49.274974] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000038d 00:07:00.364 [2024-12-15 10:44:49.275122] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (719524) > buf size (4096) 00:07:00.364 [2024-12-15 10:44:49.275460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0c00000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.364 [2024-12-15 10:44:49.275488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.364 [2024-12-15 10:44:49.275608] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0c0c000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.364 [2024-12-15 10:44:49.275625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.364 [2024-12-15 10:44:49.275737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0c0c830c cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.364 [2024-12-15 10:44:49.275756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.364 [2024-12-15 10:44:49.275873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:bea8020c cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.364 [2024-12-15 10:44:49.275890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.364 #57 NEW cov: 11895 ft: 15025 corp: 30/558b lim: 30 exec/s: 57 rss: 69Mb L: 25/30 MS: 1 ChangeByte- 00:07:00.364 [2024-12-15 10:44:49.324909] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10fe 00:07:00.364 [2024-12-15 10:44:49.325075] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:07:00.364 [2024-12-15 10:44:49.325356] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.364 [2024-12-15 10:44:49.325693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:fe000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.364 [2024-12-15 10:44:49.325721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.364 [2024-12-15 10:44:49.325841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.364 [2024-12-15 10:44:49.325859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.364 [2024-12-15 10:44:49.325987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.364 [2024-12-15 10:44:49.326003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.364 [2024-12-15 10:44:49.326131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.364 [2024-12-15 10:44:49.326150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.364 #58 NEW cov: 11895 ft: 15044 corp: 31/584b lim: 30 exec/s: 58 rss: 70Mb L: 26/30 MS: 1 PersAutoDict- DE: "\000\000\000\020"- 00:07:00.365 [2024-12-15 10:44:49.374806] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:00.365 [2024-12-15 10:44:49.375131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.365 [2024-12-15 10:44:49.375159] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.624 #59 NEW cov: 11895 ft: 15079 corp: 32/590b lim: 30 exec/s: 59 rss: 70Mb L: 6/30 MS: 1 EraseBytes- 00:07:00.624 [2024-12-15 10:44:49.425056] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10ff 00:07:00.624 [2024-12-15 10:44:49.425223] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.624 [2024-12-15 10:44:49.425382] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000000a 00:07:00.624 [2024-12-15 10:44:49.425715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.624 [2024-12-15 10:44:49.425744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.624 [2024-12-15 10:44:49.425857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.624 [2024-12-15 10:44:49.425873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.624 [2024-12-15 10:44:49.425999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.624 [2024-12-15 10:44:49.426016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.624 #60 NEW cov: 11895 ft: 15087 corp: 33/609b lim: 30 exec/s: 60 rss: 70Mb L: 19/30 MS: 1 PersAutoDict- DE: "\000\000\000\020"- 00:07:00.624 [2024-12-15 10:44:49.475130] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10292) > buf size (4096) 00:07:00.624 [2024-12-15 10:44:49.475457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0c0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.624 [2024-12-15 10:44:49.475484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.624 #61 NEW cov: 11895 ft: 15147 corp: 34/615b lim: 30 exec/s: 61 rss: 70Mb L: 6/30 MS: 1 CrossOver- 00:07:00.624 [2024-12-15 10:44:49.515376] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.624 [2024-12-15 10:44:49.515552] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:07:00.624 [2024-12-15 10:44:49.515706] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.624 [2024-12-15 10:44:49.516023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:fefe83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.624 [2024-12-15 10:44:49.516049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.624 [2024-12-15 10:44:49.516160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.624 [2024-12-15 10:44:49.516178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.624 [2024-12-15 10:44:49.516302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.624 [2024-12-15 10:44:49.516320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.624 #62 NEW cov: 11895 ft: 15151 corp: 35/637b lim: 30 exec/s: 62 rss: 70Mb L: 22/30 MS: 1 ChangeByte- 00:07:00.624 [2024-12-15 10:44:49.555286] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (33796) > buf size (4096) 00:07:00.624 [2024-12-15 10:44:49.555639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:21000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.624 [2024-12-15 10:44:49.555665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.624 #63 NEW cov: 11895 ft: 15163 corp: 36/645b lim: 30 exec/s: 63 rss: 70Mb L: 8/30 MS: 1 InsertByte- 00:07:00.624 [2024-12-15 10:44:49.595536] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.624 [2024-12-15 10:44:49.595705] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:07:00.624 [2024-12-15 10:44:49.595857] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000ffff 00:07:00.624 [2024-12-15 10:44:49.596192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:fefe83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.624 [2024-12-15 10:44:49.596219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.624 [2024-12-15 10:44:49.596338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.624 [2024-12-15 10:44:49.596353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.624 [2024-12-15 10:44:49.596473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff81ff cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.624 [2024-12-15 10:44:49.596492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.624 #64 NEW cov: 11895 ft: 15167 corp: 37/667b lim: 30 exec/s: 64 rss: 70Mb L: 22/30 MS: 1 ChangeByte- 00:07:00.624 [2024-12-15 10:44:49.635894] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (12292) > buf size (4096) 00:07:00.624 [2024-12-15 10:44:49.636060] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (12340) > buf size (4096) 00:07:00.624 [2024-12-15 10:44:49.636216] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000038d 00:07:00.624 [2024-12-15 10:44:49.636369] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (719036) > buf size (4096) 00:07:00.624 [2024-12-15 10:44:49.636726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0c00000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.624 [2024-12-15 10:44:49.636754] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.624 [2024-12-15 10:44:49.636865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0c0c000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.624 [2024-12-15 10:44:49.636883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.624 [2024-12-15 10:44:49.637001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0c0c830c cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.624 [2024-12-15 10:44:49.637018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.884 [2024-12-15 10:44:49.637132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:be2e020c cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.884 [2024-12-15 10:44:49.637151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.884 #65 NEW cov: 11895 ft: 15180 corp: 38/692b lim: 30 exec/s: 65 rss: 70Mb L: 25/30 MS: 1 ChangeByte- 00:07:00.884 [2024-12-15 10:44:49.685905] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.884 [2024-12-15 10:44:49.686059] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.884 [2024-12-15 10:44:49.686196] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.884 [2024-12-15 10:44:49.686345] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:00.884 [2024-12-15 10:44:49.686512] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003f0a 00:07:00.884 [2024-12-15 10:44:49.686871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:21ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.884 [2024-12-15 10:44:49.686904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.884 [2024-12-15 10:44:49.687019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.884 [2024-12-15 10:44:49.687035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.884 [2024-12-15 10:44:49.687157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.884 [2024-12-15 10:44:49.687174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.884 [2024-12-15 10:44:49.687288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.884 [2024-12-15 10:44:49.687306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.884 [2024-12-15 10:44:49.687420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 
cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.884 [2024-12-15 10:44:49.687438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:00.884 #66 NEW cov: 11895 ft: 15210 corp: 39/722b lim: 30 exec/s: 66 rss: 70Mb L: 30/30 MS: 1 CrossOver- 00:07:00.884 [2024-12-15 10:44:49.735922] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002dff 00:07:00.884 [2024-12-15 10:44:49.736093] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2000000ff 00:07:00.884 [2024-12-15 10:44:49.736240] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3000076ff 00:07:00.884 [2024-12-15 10:44:49.736559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:fefe83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.884 [2024-12-15 10:44:49.736587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.885 [2024-12-15 10:44:49.736710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.885 [2024-12-15 10:44:49.736727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.885 [2024-12-15 10:44:49.736844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.885 [2024-12-15 10:44:49.736862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.885 #67 NEW cov: 11895 ft: 15214 corp: 40/745b lim: 30 exec/s: 67 rss: 70Mb L: 23/30 MS: 1 InsertByte- 00:07:00.885 [2024-12-15 10:44:49.776054] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (12292) > buf size (4096) 00:07:00.885 [2024-12-15 10:44:49.776212] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (12340) > buf size (4096) 00:07:00.885 [2024-12-15 10:44:49.776559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0c00000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.885 [2024-12-15 10:44:49.776587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.885 [2024-12-15 10:44:49.776704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0c0c000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.885 [2024-12-15 10:44:49.776723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.885 #68 NEW cov: 11895 ft: 15286 corp: 41/762b lim: 30 exec/s: 34 rss: 70Mb L: 17/30 MS: 1 ChangeByte- 00:07:00.885 #68 DONE cov: 11895 ft: 15286 corp: 41/762b lim: 30 exec/s: 34 rss: 70Mb 00:07:00.885 ###### Recommended dictionary. ###### 00:07:00.885 "\000\000\000\020" # Uses: 3 00:07:00.885 "\001\000\000\000\000\000\000\004" # Uses: 1 00:07:00.885 "\377\377\377\036" # Uses: 1 00:07:00.885 "\001\000\000\016" # Uses: 0 00:07:00.885 "\001\000\000\000\000\000\003\335" # Uses: 0 00:07:00.885 "\377\003\215\276\250\014\212N" # Uses: 0 00:07:00.885 ###### End of recommended dictionary. 
###### 00:07:00.885 Done 68 runs in 2 second(s) 00:07:01.144 10:44:49 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_1.conf 00:07:01.144 10:44:49 -- ../common.sh@72 -- # (( i++ )) 00:07:01.144 10:44:49 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:01.144 10:44:49 -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:01.144 10:44:49 -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:07:01.144 10:44:49 -- nvmf/run.sh@24 -- # local timen=1 00:07:01.144 10:44:49 -- nvmf/run.sh@25 -- # local core=0x1 00:07:01.144 10:44:49 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:01.144 10:44:49 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:07:01.144 10:44:49 -- nvmf/run.sh@29 -- # printf %02d 2 00:07:01.144 10:44:49 -- nvmf/run.sh@29 -- # port=4402 00:07:01.144 10:44:49 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:01.144 10:44:49 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:07:01.144 10:44:49 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:01.144 10:44:49 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 -r /var/tmp/spdk2.sock 00:07:01.144 [2024-12-15 10:44:49.952442] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:01.144 [2024-12-15 10:44:49.952508] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1306748 ] 00:07:01.144 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.144 [2024-12-15 10:44:50.131140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.404 [2024-12-15 10:44:50.201044] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:01.404 [2024-12-15 10:44:50.201173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.404 [2024-12-15 10:44:50.259699] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.404 [2024-12-15 10:44:50.276030] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:07:01.404 INFO: Running with entropic power schedule (0xFF, 100). 00:07:01.404 INFO: Seed: 847940040 00:07:01.404 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:01.404 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:01.404 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:01.404 INFO: A corpus is not provided, starting from an empty corpus 00:07:01.404 #2 INITED exec/s: 0 rss: 61Mb 00:07:01.404 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:01.404 This may also happen if the target rejected all inputs we tried so far 00:07:01.404 [2024-12-15 10:44:50.341456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.404 [2024-12-15 10:44:50.341486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.404 [2024-12-15 10:44:50.341546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.404 [2024-12-15 10:44:50.341563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.664 NEW_FUNC[1/669]: 0x43db78 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:07:01.664 NEW_FUNC[2/669]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:01.664 #8 NEW cov: 11576 ft: 11577 corp: 2/20b lim: 35 exec/s: 0 rss: 68Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:07:01.664 [2024-12-15 10:44:50.662580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-12-15 10:44:50.662616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.664 [2024-12-15 10:44:50.662675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-12-15 10:44:50.662692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.664 [2024-12-15 10:44:50.662750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-12-15 10:44:50.662766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.664 [2024-12-15 10:44:50.662822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-12-15 10:44:50.662838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.664 [2024-12-15 10:44:50.662894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-12-15 10:44:50.662910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.923 NEW_FUNC[1/1]: 0x16b1378 in nvme_qpair_check_enabled /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:637 00:07:01.923 #9 NEW cov: 11693 ft: 12517 corp: 3/55b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:01.923 [2024-12-15 10:44:50.702586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:01.923 [2024-12-15 10:44:50.702612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.923 [2024-12-15 10:44:50.702683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:3100ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.923 [2024-12-15 10:44:50.702697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.923 [2024-12-15 10:44:50.702748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.923 [2024-12-15 10:44:50.702762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.923 [2024-12-15 10:44:50.702815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.923 [2024-12-15 10:44:50.702829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.923 [2024-12-15 10:44:50.702882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.923 [2024-12-15 10:44:50.702895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.923 #10 NEW cov: 11699 ft: 12888 corp: 4/90b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 ChangeByte- 00:07:01.924 [2024-12-15 10:44:50.742665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.924 [2024-12-15 10:44:50.742690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.924 [2024-12-15 10:44:50.742742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.924 [2024-12-15 10:44:50.742756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.924 [2024-12-15 10:44:50.742808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.924 [2024-12-15 10:44:50.742821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.924 [2024-12-15 10:44:50.742873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.924 [2024-12-15 10:44:50.742885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.924 [2024-12-15 10:44:50.742937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff000aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.924 [2024-12-15 10:44:50.742951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.924 #11 NEW cov: 11784 ft: 13116 corp: 5/125b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 CrossOver- 00:07:01.924 [2024-12-15 10:44:50.782428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.924 [2024-12-15 10:44:50.782452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.924 [2024-12-15 10:44:50.782505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45c20045 cdw11:ba00baba SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.924 [2024-12-15 10:44:50.782518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.924 #12 NEW cov: 11784 ft: 13295 corp: 6/144b lim: 35 exec/s: 0 rss: 69Mb L: 19/35 MS: 1 ChangeBinInt- 00:07:01.924 [2024-12-15 10:44:50.822762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.924 [2024-12-15 10:44:50.822787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.924 [2024-12-15 10:44:50.822839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.924 [2024-12-15 10:44:50.822853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.924 [2024-12-15 10:44:50.822906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.924 [2024-12-15 10:44:50.822919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.924 [2024-12-15 10:44:50.822968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff000aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.924 [2024-12-15 10:44:50.822981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.924 #13 NEW cov: 11784 ft: 13454 corp: 7/172b lim: 35 exec/s: 0 rss: 69Mb L: 28/35 MS: 1 EraseBytes- 00:07:01.924 [2024-12-15 10:44:50.862625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.924 [2024-12-15 10:44:50.862649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.924 [2024-12-15 10:44:50.862703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:0a004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.924 [2024-12-15 10:44:50.862717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.924 #14 NEW cov: 11784 ft: 13551 corp: 8/191b lim: 35 exec/s: 0 rss: 69Mb L: 19/35 MS: 1 CopyPart- 00:07:01.924 [2024-12-15 10:44:50.902711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 
cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.924 [2024-12-15 10:44:50.902735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.924 [2024-12-15 10:44:50.902789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:0a004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.924 [2024-12-15 10:44:50.902803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.924 #15 NEW cov: 11784 ft: 13622 corp: 9/210b lim: 35 exec/s: 0 rss: 69Mb L: 19/35 MS: 1 ChangeBit- 00:07:02.184 [2024-12-15 10:44:50.942742] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:02.184 [2024-12-15 10:44:50.943264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:0100ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.184 [2024-12-15 10:44:50.943289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.184 [2024-12-15 10:44:50.943343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.184 [2024-12-15 10:44:50.943359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.184 [2024-12-15 10:44:50.943410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.184 [2024-12-15 10:44:50.943427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.184 [2024-12-15 10:44:50.943478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.184 [2024-12-15 10:44:50.943491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.184 [2024-12-15 10:44:50.943541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff000aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.184 [2024-12-15 10:44:50.943555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.184 #16 NEW cov: 11793 ft: 13662 corp: 10/245b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:02.184 [2024-12-15 10:44:50.983083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.184 [2024-12-15 10:44:50.983107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.184 [2024-12-15 10:44:50.983163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.184 [2024-12-15 10:44:50.983180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.184 
[2024-12-15 10:44:50.983233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.184 [2024-12-15 10:44:50.983246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.184 #18 NEW cov: 11793 ft: 13857 corp: 11/266b lim: 35 exec/s: 0 rss: 69Mb L: 21/35 MS: 2 CrossOver-CrossOver- 00:07:02.184 [2024-12-15 10:44:51.023322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.184 [2024-12-15 10:44:51.023346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.184 [2024-12-15 10:44:51.023400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.184 [2024-12-15 10:44:51.023418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.184 [2024-12-15 10:44:51.023469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffcc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.184 [2024-12-15 10:44:51.023482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.184 [2024-12-15 10:44:51.023533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.184 [2024-12-15 10:44:51.023546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.184 #19 NEW cov: 11793 ft: 13896 corp: 12/295b lim: 35 exec/s: 0 rss: 69Mb L: 29/35 MS: 1 InsertByte- 00:07:02.184 [2024-12-15 10:44:51.063424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.184 [2024-12-15 10:44:51.063448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.184 [2024-12-15 10:44:51.063500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.184 [2024-12-15 10:44:51.063514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.184 [2024-12-15 10:44:51.063564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:4545000a cdw11:45006545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.184 [2024-12-15 10:44:51.063576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.184 [2024-12-15 10:44:51.063627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:45450045 cdw11:0a004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.184 [2024-12-15 10:44:51.063640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.184 #20 NEW cov: 11793 
ft: 13925 corp: 13/328b lim: 35 exec/s: 0 rss: 69Mb L: 33/35 MS: 1 CopyPart- 00:07:02.184 [2024-12-15 10:44:51.103540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.184 [2024-12-15 10:44:51.103563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.184 [2024-12-15 10:44:51.103618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.184 [2024-12-15 10:44:51.103634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.184 [2024-12-15 10:44:51.103685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.184 [2024-12-15 10:44:51.103698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.184 [2024-12-15 10:44:51.103748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:0a0a0045 cdw11:65004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.184 [2024-12-15 10:44:51.103761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.184 #21 NEW cov: 11793 ft: 13949 corp: 14/358b lim: 35 exec/s: 0 rss: 69Mb L: 30/35 MS: 1 CopyPart- 00:07:02.184 [2024-12-15 10:44:51.143639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.184 [2024-12-15 10:44:51.143663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.184 [2024-12-15 10:44:51.143717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.184 [2024-12-15 10:44:51.143730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.184 [2024-12-15 10:44:51.143783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.184 [2024-12-15 10:44:51.143796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.184 [2024-12-15 10:44:51.143847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff000aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.184 [2024-12-15 10:44:51.143860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.184 #22 NEW cov: 11793 ft: 13969 corp: 15/386b lim: 35 exec/s: 0 rss: 69Mb L: 28/35 MS: 1 ShuffleBytes- 00:07:02.184 [2024-12-15 10:44:51.183778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.184 [2024-12-15 10:44:51.183803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.184 [2024-12-15 10:44:51.183855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.184 [2024-12-15 10:44:51.183885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.185 [2024-12-15 10:44:51.183938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.185 [2024-12-15 10:44:51.183951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.185 [2024-12-15 10:44:51.184003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.185 [2024-12-15 10:44:51.184016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.444 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:02.444 #23 NEW cov: 11816 ft: 14006 corp: 16/415b lim: 35 exec/s: 0 rss: 69Mb L: 29/35 MS: 1 ShuffleBytes- 00:07:02.444 [2024-12-15 10:44:51.233582] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:02.444 [2024-12-15 10:44:51.234079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.444 [2024-12-15 10:44:51.234105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.444 [2024-12-15 10:44:51.234157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.444 [2024-12-15 10:44:51.234172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.444 [2024-12-15 10:44:51.234222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.444 [2024-12-15 10:44:51.234236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.444 [2024-12-15 10:44:51.234289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.444 [2024-12-15 10:44:51.234303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.444 [2024-12-15 10:44:51.234352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff000aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.444 [2024-12-15 10:44:51.234366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.444 #24 NEW cov: 11816 ft: 14028 corp: 17/450b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 CopyPart- 00:07:02.444 [2024-12-15 10:44:51.273738] 
ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:02.444 [2024-12-15 10:44:51.274041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.444 [2024-12-15 10:44:51.274066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.444 [2024-12-15 10:44:51.274120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.444 [2024-12-15 10:44:51.274134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.444 [2024-12-15 10:44:51.274186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ff000eff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.444 [2024-12-15 10:44:51.274201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.444 [2024-12-15 10:44:51.274254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff000aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.444 [2024-12-15 10:44:51.274266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.444 #25 NEW cov: 11816 ft: 14037 corp: 18/478b lim: 35 exec/s: 0 rss: 69Mb L: 28/35 MS: 1 CMP- DE: "\000\000\000\016"- 00:07:02.444 [2024-12-15 10:44:51.314275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.444 [2024-12-15 10:44:51.314298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.444 [2024-12-15 10:44:51.314351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.444 [2024-12-15 10:44:51.314364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.445 [2024-12-15 10:44:51.314423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.445 [2024-12-15 10:44:51.314437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.445 [2024-12-15 10:44:51.314488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.445 [2024-12-15 10:44:51.314501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.445 [2024-12-15 10:44:51.314552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.445 [2024-12-15 10:44:51.314565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.445 #26 NEW 
cov: 11816 ft: 14088 corp: 19/513b lim: 35 exec/s: 26 rss: 69Mb L: 35/35 MS: 1 CMP- DE: "\377\377\377\377"- 00:07:02.445 [2024-12-15 10:44:51.354302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.445 [2024-12-15 10:44:51.354326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.445 [2024-12-15 10:44:51.354379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:4545001e cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.445 [2024-12-15 10:44:51.354393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.445 [2024-12-15 10:44:51.354447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.445 [2024-12-15 10:44:51.354461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.445 [2024-12-15 10:44:51.354513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:0a0a0045 cdw11:65004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.445 [2024-12-15 10:44:51.354526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.445 #27 NEW cov: 11816 ft: 14105 corp: 20/543b lim: 35 exec/s: 27 rss: 69Mb L: 30/35 MS: 1 ChangeBinInt- 00:07:02.445 [2024-12-15 10:44:51.394382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.445 [2024-12-15 10:44:51.394407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.445 [2024-12-15 10:44:51.394464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.445 [2024-12-15 10:44:51.394478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.445 [2024-12-15 10:44:51.394531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.445 [2024-12-15 10:44:51.394544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.445 [2024-12-15 10:44:51.394596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff000aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.445 [2024-12-15 10:44:51.394609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.445 #28 NEW cov: 11816 ft: 14138 corp: 21/571b lim: 35 exec/s: 28 rss: 69Mb L: 28/35 MS: 1 CMP- DE: "\377\377"- 00:07:02.445 [2024-12-15 10:44:51.434136] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:02.445 [2024-12-15 10:44:51.434256] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:02.445 
[2024-12-15 10:44:51.434669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:0100ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.445 [2024-12-15 10:44:51.434694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.445 [2024-12-15 10:44:51.434749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.445 [2024-12-15 10:44:51.434764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.445 [2024-12-15 10:44:51.434814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.445 [2024-12-15 10:44:51.434828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.445 [2024-12-15 10:44:51.434879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.445 [2024-12-15 10:44:51.434892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.445 [2024-12-15 10:44:51.434943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff000aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.445 [2024-12-15 10:44:51.434956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.445 #29 NEW cov: 11816 ft: 14173 corp: 22/606b lim: 35 exec/s: 29 rss: 69Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:02.704 [2024-12-15 10:44:51.474350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.704 [2024-12-15 10:44:51.474374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.705 [2024-12-15 10:44:51.474431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004d45 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.705 [2024-12-15 10:44:51.474445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.705 #30 NEW cov: 11816 ft: 14238 corp: 23/625b lim: 35 exec/s: 30 rss: 69Mb L: 19/35 MS: 1 ChangeBit- 00:07:02.705 [2024-12-15 10:44:51.514502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.705 [2024-12-15 10:44:51.514527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.705 [2024-12-15 10:44:51.514581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:4500454d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.705 [2024-12-15 10:44:51.514595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
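Each fuzz iteration in this trace is logged as a command/completion pair: nvme_qpair.c:225 prints the admin command the fuzzer submitted (IDENTIFY, opcode 06, with its cdw10/cdw11 dwords), and nvme_qpair.c:477 prints the target's completion — mostly INVALID FIELD (status 00/02) or INVALID NAMESPACE OR FORMAT (00/0b), i.e. the malformed command was rejected rather than crashing the target. The repeated 0x45 bytes in cdw10/cdw11 appear to come straight from the fuzz input. To read one printed dword back as raw command bytes, the value below is taken from the trace above; the pipeline itself is only an illustration:

    # cdw10:45450045 is a 32-bit dword; list its bytes in little-endian
    # memory order, as they would sit in the raw command buffer.
    printf '%08x\n' 0x45450045 | fold -w2 | tac    # -> 45 00 45 45
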
00:07:02.705 #31 NEW cov: 11816 ft: 14248 corp: 24/645b lim: 35 exec/s: 31 rss: 70Mb L: 20/35 MS: 1 InsertByte- 00:07:02.705 [2024-12-15 10:44:51.554609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:45450045 cdw11:45004547 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.705 [2024-12-15 10:44:51.554634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.705 [2024-12-15 10:44:51.554687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:4500454d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.705 [2024-12-15 10:44:51.554706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.705 #32 NEW cov: 11816 ft: 14261 corp: 25/665b lim: 35 exec/s: 32 rss: 70Mb L: 20/35 MS: 1 ChangeBit- 00:07:02.705 [2024-12-15 10:44:51.594902] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:02.705 [2024-12-15 10:44:51.595143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.705 [2024-12-15 10:44:51.595168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.705 [2024-12-15 10:44:51.595222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.705 [2024-12-15 10:44:51.595235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.705 [2024-12-15 10:44:51.595287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.705 [2024-12-15 10:44:51.595300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.705 [2024-12-15 10:44:51.595352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.705 [2024-12-15 10:44:51.595366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.705 [2024-12-15 10:44:51.595420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00230000 cdw11:ff000aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.705 [2024-12-15 10:44:51.595435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.705 #33 NEW cov: 11816 ft: 14292 corp: 26/700b lim: 35 exec/s: 33 rss: 70Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:02.705 [2024-12-15 10:44:51.635061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.705 [2024-12-15 10:44:51.635086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.705 [2024-12-15 10:44:51.635139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 
cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.705 [2024-12-15 10:44:51.635153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.705 [2024-12-15 10:44:51.635203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.705 [2024-12-15 10:44:51.635216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.705 [2024-12-15 10:44:51.635268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff000aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.705 [2024-12-15 10:44:51.635283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.705 #34 NEW cov: 11816 ft: 14316 corp: 27/728b lim: 35 exec/s: 34 rss: 70Mb L: 28/35 MS: 1 PersAutoDict- DE: "\377\377"- 00:07:02.705 [2024-12-15 10:44:51.675047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:45ff0045 cdw11:4500ff45 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.705 [2024-12-15 10:44:51.675073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.705 [2024-12-15 10:44:51.675131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:4d004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.705 [2024-12-15 10:44:51.675145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.705 [2024-12-15 10:44:51.675196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.705 [2024-12-15 10:44:51.675210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.705 #35 NEW cov: 11816 ft: 14322 corp: 28/749b lim: 35 exec/s: 35 rss: 70Mb L: 21/35 MS: 1 PersAutoDict- DE: "\377\377"- 00:07:02.705 [2024-12-15 10:44:51.715164] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:02.705 [2024-12-15 10:44:51.715487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.705 [2024-12-15 10:44:51.715511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.705 [2024-12-15 10:44:51.715566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:bbbb00ff cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.705 [2024-12-15 10:44:51.715581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.705 [2024-12-15 10:44:51.715633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00bb cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.705 [2024-12-15 10:44:51.715647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:07:02.705 [2024-12-15 10:44:51.715697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:ff000eff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.705 [2024-12-15 10:44:51.715712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.705 [2024-12-15 10:44:51.715766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff000aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.705 [2024-12-15 10:44:51.715779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.964 #36 NEW cov: 11816 ft: 14330 corp: 29/784b lim: 35 exec/s: 36 rss: 70Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:02.965 [2024-12-15 10:44:51.755409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.965 [2024-12-15 10:44:51.755438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.965 [2024-12-15 10:44:51.755492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.965 [2024-12-15 10:44:51.755506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.965 [2024-12-15 10:44:51.755558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.965 [2024-12-15 10:44:51.755571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.965 [2024-12-15 10:44:51.755623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.965 [2024-12-15 10:44:51.755635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.965 #37 NEW cov: 11816 ft: 14348 corp: 30/812b lim: 35 exec/s: 37 rss: 70Mb L: 28/35 MS: 1 CopyPart- 00:07:02.965 [2024-12-15 10:44:51.795258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.965 [2024-12-15 10:44:51.795282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.965 [2024-12-15 10:44:51.795335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.965 [2024-12-15 10:44:51.795349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.965 #38 NEW cov: 11816 ft: 14356 corp: 31/831b lim: 35 exec/s: 38 rss: 70Mb L: 19/35 MS: 1 CopyPart- 00:07:02.965 [2024-12-15 10:44:51.835446] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:02.965 [2024-12-15 10:44:51.835660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 
cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.965 [2024-12-15 10:44:51.835685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.965 [2024-12-15 10:44:51.835738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:bbbb00ff cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.965 [2024-12-15 10:44:51.835752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.965 [2024-12-15 10:44:51.835804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00bb cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.965 [2024-12-15 10:44:51.835817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.965 [2024-12-15 10:44:51.835868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:ff000eff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.965 [2024-12-15 10:44:51.835883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.965 #39 NEW cov: 11816 ft: 14429 corp: 32/864b lim: 35 exec/s: 39 rss: 70Mb L: 33/35 MS: 1 EraseBytes- 00:07:02.965 [2024-12-15 10:44:51.875498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.965 [2024-12-15 10:44:51.875522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.965 [2024-12-15 10:44:51.875574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:0a004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.965 [2024-12-15 10:44:51.875587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.965 #40 NEW cov: 11816 ft: 14443 corp: 33/883b lim: 35 exec/s: 40 rss: 70Mb L: 19/35 MS: 1 ShuffleBytes- 00:07:02.965 [2024-12-15 10:44:51.915833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.965 [2024-12-15 10:44:51.915858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.965 [2024-12-15 10:44:51.915912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.965 [2024-12-15 10:44:51.915925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.965 [2024-12-15 10:44:51.915978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.965 [2024-12-15 10:44:51.915994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.965 [2024-12-15 10:44:51.916045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 
nsid:0 cdw10:0a0a0045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.965 [2024-12-15 10:44:51.916058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.965 #41 NEW cov: 11816 ft: 14480 corp: 34/915b lim: 35 exec/s: 41 rss: 70Mb L: 32/35 MS: 1 CopyPart- 00:07:02.965 [2024-12-15 10:44:51.955860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:454500ff cdw11:4500ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.965 [2024-12-15 10:44:51.955884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.965 [2024-12-15 10:44:51.955936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.965 [2024-12-15 10:44:51.955950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.965 [2024-12-15 10:44:51.956003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.965 [2024-12-15 10:44:51.956016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.225 #42 NEW cov: 11816 ft: 14495 corp: 35/937b lim: 35 exec/s: 42 rss: 70Mb L: 22/35 MS: 1 InsertByte- 00:07:03.225 [2024-12-15 10:44:51.996068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.225 [2024-12-15 10:44:51.996093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.225 [2024-12-15 10:44:51.996146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.225 [2024-12-15 10:44:51.996159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.225 [2024-12-15 10:44:51.996210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00fffe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.225 [2024-12-15 10:44:51.996223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.225 [2024-12-15 10:44:51.996274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.225 [2024-12-15 10:44:51.996287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.225 #43 NEW cov: 11816 ft: 14509 corp: 36/966b lim: 35 exec/s: 43 rss: 70Mb L: 29/35 MS: 1 ChangeBit- 00:07:03.225 [2024-12-15 10:44:52.035932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:45450045 cdw11:5b004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.225 [2024-12-15 10:44:52.035956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.225 [2024-12-15 10:44:52.036008] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.225 [2024-12-15 10:44:52.036022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.225 #44 NEW cov: 11816 ft: 14526 corp: 37/986b lim: 35 exec/s: 44 rss: 70Mb L: 20/35 MS: 1 InsertByte- 00:07:03.225 [2024-12-15 10:44:52.066307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.225 [2024-12-15 10:44:52.066331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.225 [2024-12-15 10:44:52.066383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.225 [2024-12-15 10:44:52.066396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.225 [2024-12-15 10:44:52.066462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.225 [2024-12-15 10:44:52.066476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.225 [2024-12-15 10:44:52.066528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.225 [2024-12-15 10:44:52.066542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.225 #45 NEW cov: 11816 ft: 14567 corp: 38/1017b lim: 35 exec/s: 45 rss: 70Mb L: 31/35 MS: 1 EraseBytes- 00:07:03.225 [2024-12-15 10:44:52.106425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.225 [2024-12-15 10:44:52.106448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.225 [2024-12-15 10:44:52.106481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:4545001e cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.225 [2024-12-15 10:44:52.106495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.225 [2024-12-15 10:44:52.106547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.225 [2024-12-15 10:44:52.106560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.225 [2024-12-15 10:44:52.106611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:0a0a0045 cdw11:65004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.225 [2024-12-15 10:44:52.106624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.225 #46 NEW cov: 11816 ft: 14575 corp: 39/1047b lim: 35 
exec/s: 46 rss: 70Mb L: 30/35 MS: 1 ShuffleBytes- 00:07:03.225 [2024-12-15 10:44:52.146519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.225 [2024-12-15 10:44:52.146544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.225 [2024-12-15 10:44:52.146598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:bbbb00ff cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.225 [2024-12-15 10:44:52.146612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.225 [2024-12-15 10:44:52.146663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:000000bb cdw11:ff000eff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.225 [2024-12-15 10:44:52.146676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.225 [2024-12-15 10:44:52.146720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff000aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.225 [2024-12-15 10:44:52.146736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.225 #47 NEW cov: 11816 ft: 14593 corp: 40/1075b lim: 35 exec/s: 47 rss: 70Mb L: 28/35 MS: 1 EraseBytes- 00:07:03.225 [2024-12-15 10:44:52.186631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.225 [2024-12-15 10:44:52.186655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.225 [2024-12-15 10:44:52.186709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.225 [2024-12-15 10:44:52.186722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.225 [2024-12-15 10:44:52.186774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.225 [2024-12-15 10:44:52.186787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.225 [2024-12-15 10:44:52.186840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:0a0a0045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.225 [2024-12-15 10:44:52.186852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.225 #48 NEW cov: 11816 ft: 14631 corp: 41/1107b lim: 35 exec/s: 48 rss: 70Mb L: 32/35 MS: 1 CopyPart- 00:07:03.225 [2024-12-15 10:44:52.226755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.225 [2024-12-15 10:44:52.226780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:07:03.225 [2024-12-15 10:44:52.226832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.225 [2024-12-15 10:44:52.226845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.225 [2024-12-15 10:44:52.226897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.225 [2024-12-15 10:44:52.226911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.225 [2024-12-15 10:44:52.226963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffcc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.225 [2024-12-15 10:44:52.226976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.485 #49 NEW cov: 11816 ft: 14651 corp: 42/1141b lim: 35 exec/s: 49 rss: 70Mb L: 34/35 MS: 1 InsertRepeatedBytes- 00:07:03.485 [2024-12-15 10:44:52.266525] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:03.485 [2024-12-15 10:44:52.267046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.485 [2024-12-15 10:44:52.267070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.485 [2024-12-15 10:44:52.267124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.485 [2024-12-15 10:44:52.267139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.485 [2024-12-15 10:44:52.267195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.485 [2024-12-15 10:44:52.267209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.485 [2024-12-15 10:44:52.267262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:fffc00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.485 [2024-12-15 10:44:52.267276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.485 [2024-12-15 10:44:52.267329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff000aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.485 [2024-12-15 10:44:52.267342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.485 #50 NEW cov: 11816 ft: 14652 corp: 43/1176b lim: 35 exec/s: 50 rss: 70Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:03.485 [2024-12-15 10:44:52.306627] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:03.485 [2024-12-15 10:44:52.307137] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:0100ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.485 [2024-12-15 10:44:52.307161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.485 [2024-12-15 10:44:52.307216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.485 [2024-12-15 10:44:52.307232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.485 [2024-12-15 10:44:52.307277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.485 [2024-12-15 10:44:52.307290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.485 [2024-12-15 10:44:52.307341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.485 [2024-12-15 10:44:52.307355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.485 [2024-12-15 10:44:52.307406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff000aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.485 [2024-12-15 10:44:52.307423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.486 #51 NEW cov: 11816 ft: 14666 corp: 44/1211b lim: 35 exec/s: 25 rss: 70Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:03.486 #51 DONE cov: 11816 ft: 14666 corp: 44/1211b lim: 35 exec/s: 25 rss: 70Mb 00:07:03.486 ###### Recommended dictionary. ###### 00:07:03.486 "\000\000\000\016" # Uses: 0 00:07:03.486 "\377\377\377\377" # Uses: 0 00:07:03.486 "\377\377" # Uses: 2 00:07:03.486 ###### End of recommended dictionary. 
###### 00:07:03.486 Done 51 runs in 2 second(s) 00:07:03.486 10:44:52 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_2.conf 00:07:03.486 10:44:52 -- ../common.sh@72 -- # (( i++ )) 00:07:03.486 10:44:52 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:03.486 10:44:52 -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:03.486 10:44:52 -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:07:03.486 10:44:52 -- nvmf/run.sh@24 -- # local timen=1 00:07:03.486 10:44:52 -- nvmf/run.sh@25 -- # local core=0x1 00:07:03.486 10:44:52 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:03.486 10:44:52 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:07:03.486 10:44:52 -- nvmf/run.sh@29 -- # printf %02d 3 00:07:03.486 10:44:52 -- nvmf/run.sh@29 -- # port=4403 00:07:03.486 10:44:52 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:03.486 10:44:52 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:07:03.486 10:44:52 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:03.486 10:44:52 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 -r /var/tmp/spdk3.sock 00:07:03.486 [2024-12-15 10:44:52.479352] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:03.486 [2024-12-15 10:44:52.479450] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1307285 ] 00:07:03.745 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.745 [2024-12-15 10:44:52.655579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.745 [2024-12-15 10:44:52.718651] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:03.745 [2024-12-15 10:44:52.718773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.005 [2024-12-15 10:44:52.776540] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.005 [2024-12-15 10:44:52.792882] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:07:04.005 INFO: Running with entropic power schedule (0xFF, 100). 00:07:04.005 INFO: Seed: 3364923644 00:07:04.005 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:04.005 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:04.005 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:04.005 INFO: A corpus is not provided, starting from an empty corpus 00:07:04.005 #2 INITED exec/s: 0 rss: 60Mb 00:07:04.005 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:04.005 This may also happen if the target rejected all inputs we tried so far 00:07:04.264 NEW_FUNC[1/659]: 0x43f858 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:07:04.264 NEW_FUNC[2/659]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:04.264 #9 NEW cov: 11502 ft: 11501 corp: 2/20b lim: 20 exec/s: 0 rss: 68Mb L: 19/19 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:04.264 #10 NEW cov: 11615 ft: 12229 corp: 3/40b lim: 20 exec/s: 0 rss: 68Mb L: 20/20 MS: 1 CopyPart- 00:07:04.264 #11 NEW cov: 11621 ft: 12475 corp: 4/60b lim: 20 exec/s: 0 rss: 68Mb L: 20/20 MS: 1 CopyPart- 00:07:04.523 #12 NEW cov: 11706 ft: 12829 corp: 5/79b lim: 20 exec/s: 0 rss: 68Mb L: 19/20 MS: 1 CopyPart- 00:07:04.523 #13 NEW cov: 11706 ft: 12990 corp: 6/98b lim: 20 exec/s: 0 rss: 68Mb L: 19/20 MS: 1 ShuffleBytes- 00:07:04.523 #20 NEW cov: 11711 ft: 13427 corp: 7/108b lim: 20 exec/s: 0 rss: 68Mb L: 10/20 MS: 2 CopyPart-CMP- DE: "\377\377\377\377\377\377\377\377"- 00:07:04.523 #21 NEW cov: 11711 ft: 13520 corp: 8/128b lim: 20 exec/s: 0 rss: 68Mb L: 20/20 MS: 1 ShuffleBytes- 00:07:04.523 #22 NEW cov: 11711 ft: 13553 corp: 9/148b lim: 20 exec/s: 0 rss: 68Mb L: 20/20 MS: 1 CopyPart- 00:07:04.523 #23 NEW cov: 11711 ft: 13627 corp: 10/168b lim: 20 exec/s: 0 rss: 69Mb L: 20/20 MS: 1 ShuffleBytes- 00:07:04.523 #24 NEW cov: 11711 ft: 13664 corp: 11/188b lim: 20 exec/s: 0 rss: 69Mb L: 20/20 MS: 1 ChangeBinInt- 00:07:04.781 #25 NEW cov: 11711 ft: 13729 corp: 12/208b lim: 20 exec/s: 0 rss: 69Mb L: 20/20 MS: 1 InsertByte- 00:07:04.781 #26 NEW cov: 11711 ft: 13753 corp: 13/228b lim: 20 exec/s: 0 rss: 69Mb L: 20/20 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:07:04.781 #27 NEW cov: 11711 ft: 13762 corp: 14/248b lim: 20 exec/s: 0 rss: 69Mb L: 20/20 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:07:04.781 #28 NEW cov: 11711 ft: 13809 corp: 15/268b lim: 20 exec/s: 0 rss: 69Mb L: 20/20 MS: 1 ChangeByte- 00:07:04.781 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:04.781 #29 NEW cov: 11734 ft: 13873 corp: 16/288b lim: 20 exec/s: 0 rss: 69Mb L: 20/20 MS: 1 ChangeBinInt- 00:07:04.781 #30 NEW cov: 11734 ft: 13888 corp: 17/308b lim: 20 exec/s: 0 rss: 69Mb L: 20/20 MS: 1 ChangeByte- 00:07:05.039 #31 NEW cov: 11734 ft: 13998 corp: 18/316b lim: 20 exec/s: 0 rss: 69Mb L: 8/20 MS: 1 EraseBytes- 00:07:05.039 #32 NEW cov: 11734 ft: 14022 corp: 19/336b lim: 20 exec/s: 32 rss: 69Mb L: 20/20 MS: 1 ChangeBit- 00:07:05.039 #33 NEW cov: 11734 ft: 14029 corp: 20/356b lim: 20 exec/s: 33 rss: 69Mb L: 20/20 MS: 1 ShuffleBytes- 00:07:05.039 #34 NEW cov: 11734 ft: 14108 corp: 21/367b lim: 20 exec/s: 34 rss: 69Mb L: 11/20 MS: 1 CrossOver- 00:07:05.039 #35 NEW cov: 11734 ft: 14232 corp: 22/386b lim: 20 exec/s: 35 rss: 69Mb L: 19/20 MS: 1 ChangeBinInt- 00:07:05.040 #36 NEW cov: 11734 ft: 14261 corp: 23/406b lim: 20 exec/s: 36 rss: 69Mb L: 20/20 MS: 1 ChangeByte- 00:07:05.040 #37 NEW cov: 11734 ft: 14267 corp: 24/426b lim: 20 exec/s: 37 rss: 70Mb L: 20/20 MS: 1 InsertByte- 00:07:05.298 #38 NEW cov: 11734 ft: 14327 corp: 25/446b lim: 20 exec/s: 38 rss: 70Mb L: 20/20 MS: 1 ChangeBit- 00:07:05.298 [2024-12-15 10:44:54.101974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:05.298 [2024-12-15 
10:44:54.102012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.298 NEW_FUNC[1/17]: 0x111e188 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3224 00:07:05.298 NEW_FUNC[2/17]: 0x111ed08 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3166 00:07:05.298 #39 NEW cov: 11978 ft: 14632 corp: 26/466b lim: 20 exec/s: 39 rss: 70Mb L: 20/20 MS: 1 ChangeBinInt- 00:07:05.298 #42 NEW cov: 11978 ft: 14666 corp: 27/476b lim: 20 exec/s: 42 rss: 70Mb L: 10/20 MS: 3 ChangeBinInt-CopyPart-InsertRepeatedBytes- 00:07:05.298 #43 NEW cov: 11982 ft: 14785 corp: 28/488b lim: 20 exec/s: 43 rss: 70Mb L: 12/20 MS: 1 EraseBytes- 00:07:05.298 #44 NEW cov: 11982 ft: 14832 corp: 29/507b lim: 20 exec/s: 44 rss: 70Mb L: 19/20 MS: 1 CrossOver- 00:07:05.298 #45 NEW cov: 11982 ft: 14845 corp: 30/522b lim: 20 exec/s: 45 rss: 70Mb L: 15/20 MS: 1 CrossOver- 00:07:05.557 #46 NEW cov: 11982 ft: 14859 corp: 31/540b lim: 20 exec/s: 46 rss: 70Mb L: 18/20 MS: 1 CrossOver- 00:07:05.557 #47 NEW cov: 11982 ft: 14862 corp: 32/559b lim: 20 exec/s: 47 rss: 70Mb L: 19/20 MS: 1 ChangeBit- 00:07:05.557 #48 NEW cov: 11982 ft: 14872 corp: 33/578b lim: 20 exec/s: 48 rss: 70Mb L: 19/20 MS: 1 EraseBytes- 00:07:05.557 #49 NEW cov: 11982 ft: 14935 corp: 34/597b lim: 20 exec/s: 49 rss: 70Mb L: 19/20 MS: 1 CopyPart- 00:07:05.557 #50 NEW cov: 11982 ft: 14964 corp: 35/617b lim: 20 exec/s: 50 rss: 70Mb L: 20/20 MS: 1 ChangeBit- 00:07:05.557 #52 NEW cov: 11982 ft: 14975 corp: 36/632b lim: 20 exec/s: 52 rss: 70Mb L: 15/20 MS: 2 CrossOver-InsertByte- 00:07:05.816 #53 NEW cov: 11982 ft: 14978 corp: 37/652b lim: 20 exec/s: 53 rss: 70Mb L: 20/20 MS: 1 ShuffleBytes- 00:07:05.816 #54 NEW cov: 11982 ft: 14987 corp: 38/672b lim: 20 exec/s: 54 rss: 70Mb L: 20/20 MS: 1 CopyPart- 00:07:05.816 #55 NEW cov: 11982 ft: 15000 corp: 39/692b lim: 20 exec/s: 55 rss: 70Mb L: 20/20 MS: 1 CopyPart- 00:07:05.816 [2024-12-15 10:44:54.693716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:05.816 [2024-12-15 10:44:54.693744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.816 NEW_FUNC[1/2]: 0x1279a68 in nvmf_transport_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:773 00:07:05.816 NEW_FUNC[2/2]: 0x129ab28 in nvmf_tcp_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3493 00:07:05.816 #56 NEW cov: 12039 ft: 15141 corp: 40/712b lim: 20 exec/s: 56 rss: 70Mb L: 20/20 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:07:05.816 #57 NEW cov: 12039 ft: 15185 corp: 41/724b lim: 20 exec/s: 57 rss: 70Mb L: 12/20 MS: 1 ChangeBinInt- 00:07:05.816 #58 NEW cov: 12039 ft: 15196 corp: 42/739b lim: 20 exec/s: 58 rss: 70Mb L: 15/20 MS: 1 ChangeByte- 00:07:06.075 #59 NEW cov: 12039 ft: 15211 corp: 43/759b lim: 20 exec/s: 29 rss: 70Mb L: 20/20 MS: 1 ChangeBinInt- 00:07:06.075 #59 DONE cov: 12039 ft: 15211 corp: 43/759b lim: 20 exec/s: 29 rss: 70Mb 00:07:06.075 ###### Recommended dictionary. ###### 00:07:06.075 "\377\377\377\377\377\377\377\377" # Uses: 2 00:07:06.075 "\001\000\000\000\000\000\000\000" # Uses: 0 00:07:06.075 ###### End of recommended dictionary. 
###### 00:07:06.075 Done 59 runs in 2 second(s) 00:07:06.075 10:44:54 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_3.conf 00:07:06.075 10:44:54 -- ../common.sh@72 -- # (( i++ )) 00:07:06.075 10:44:54 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:06.075 10:44:54 -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:06.075 10:44:54 -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:07:06.075 10:44:54 -- nvmf/run.sh@24 -- # local timen=1 00:07:06.075 10:44:54 -- nvmf/run.sh@25 -- # local core=0x1 00:07:06.075 10:44:54 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:06.075 10:44:54 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:07:06.075 10:44:54 -- nvmf/run.sh@29 -- # printf %02d 4 00:07:06.075 10:44:54 -- nvmf/run.sh@29 -- # port=4404 00:07:06.075 10:44:54 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:06.075 10:44:54 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:07:06.075 10:44:54 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:06.075 10:44:54 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 -r /var/tmp/spdk4.sock 00:07:06.075 [2024-12-15 10:44:54.982185] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:06.075 [2024-12-15 10:44:54.982236] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1307698 ] 00:07:06.075 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.333 [2024-12-15 10:44:55.161925] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.333 [2024-12-15 10:44:55.228229] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:06.333 [2024-12-15 10:44:55.228370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.333 [2024-12-15 10:44:55.286065] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.333 [2024-12-15 10:44:55.302369] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:07:06.333 INFO: Running with entropic power schedule (0xFF, 100). 00:07:06.333 INFO: Seed: 1578955183 00:07:06.334 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:06.334 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:06.334 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:06.334 INFO: A corpus is not provided, starting from an empty corpus 00:07:06.334 #2 INITED exec/s: 0 rss: 60Mb 00:07:06.334 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
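The nvmf/run.sh trace above shows how each fuzzer instance is wired up: derive a unique TCP port from the fuzzer index (4400 + index), create a per-fuzzer corpus directory, rewrite the trsvcid in the shared fuzz_json.conf template, and launch llvm_nvme_fuzz against the resulting transport ID with a fixed time budget (-t 1) and a dedicated RPC socket. A condensed sketch of the same sequence, using only the flags visible in the command above; $SPDK_DIR and the index variable are illustrative stand-ins, not names from the script itself:

    # Launch one llvm_nvme_fuzz instance against its own TCP port
    # (fuzzer index 4 -> port 4404), mirroring the traced run.sh steps.
    i=4
    port=$(printf '44%02d' "$i")
    corpus="$SPDK_DIR/../corpus/llvm_nvmf_$i"
    cfg="/tmp/fuzz_json_$i.conf"
    mkdir -p "$corpus"
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$cfg"
    "$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
        -P "$SPDK_DIR/../output/llvm/" \
        -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port" \
        -c "$cfg" -t 1 -D "$corpus" -Z "$i" -r "/var/tmp/spdk$i.sock"
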
00:07:06.334 This may also happen if the target rejected all inputs we tried so far 00:07:06.591 [2024-12-15 10:44:55.348205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.591 [2024-12-15 10:44:55.348234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.591 [2024-12-15 10:44:55.348287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.591 [2024-12-15 10:44:55.348301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.592 [2024-12-15 10:44:55.348357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.592 [2024-12-15 10:44:55.348371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.592 [2024-12-15 10:44:55.348425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.592 [2024-12-15 10:44:55.348439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.592 [2024-12-15 10:44:55.348491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.592 [2024-12-15 10:44:55.348504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.851 NEW_FUNC[1/671]: 0x440958 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:07:06.851 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:06.851 #18 NEW cov: 11601 ft: 11602 corp: 2/36b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:06.851 [2024-12-15 10:44:55.648969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.851 [2024-12-15 10:44:55.649000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.851 [2024-12-15 10:44:55.649054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.851 [2024-12-15 10:44:55.649068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.851 [2024-12-15 10:44:55.649119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.851 [2024-12-15 10:44:55.649132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.851 [2024-12-15 10:44:55.649182] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.851 [2024-12-15 10:44:55.649195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.851 [2024-12-15 10:44:55.649246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.851 [2024-12-15 10:44:55.649259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.851 #19 NEW cov: 11714 ft: 12040 corp: 3/71b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 ChangeBit- 00:07:06.851 [2024-12-15 10:44:55.699015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.851 [2024-12-15 10:44:55.699040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.851 [2024-12-15 10:44:55.699093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.851 [2024-12-15 10:44:55.699106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.851 [2024-12-15 10:44:55.699156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.851 [2024-12-15 10:44:55.699172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.851 [2024-12-15 10:44:55.699223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.851 [2024-12-15 10:44:55.699235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.851 [2024-12-15 10:44:55.699284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.851 [2024-12-15 10:44:55.699297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.851 #20 NEW cov: 11720 ft: 12206 corp: 4/106b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 CrossOver- 00:07:06.851 [2024-12-15 10:44:55.739118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.851 [2024-12-15 10:44:55.739144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.851 [2024-12-15 10:44:55.739198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.851 [2024-12-15 10:44:55.739212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.851 [2024-12-15 10:44:55.739265] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.851 [2024-12-15 10:44:55.739278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.851 [2024-12-15 10:44:55.739329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.851 [2024-12-15 10:44:55.739342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.851 [2024-12-15 10:44:55.739396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.851 [2024-12-15 10:44:55.739408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.851 #21 NEW cov: 11805 ft: 12439 corp: 5/141b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:06.851 [2024-12-15 10:44:55.779208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.851 [2024-12-15 10:44:55.779233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.851 [2024-12-15 10:44:55.779284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.851 [2024-12-15 10:44:55.779298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.851 [2024-12-15 10:44:55.779349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.851 [2024-12-15 10:44:55.779361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.851 [2024-12-15 10:44:55.779412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.851 [2024-12-15 10:44:55.779431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.851 [2024-12-15 10:44:55.779482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.851 [2024-12-15 10:44:55.779495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.851 #22 NEW cov: 11805 ft: 12603 corp: 6/176b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:06.851 [2024-12-15 10:44:55.819360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.851 [2024-12-15 10:44:55.819384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.851 [2024-12-15 10:44:55.819430] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.851 [2024-12-15 10:44:55.819444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.851 [2024-12-15 10:44:55.819495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.851 [2024-12-15 10:44:55.819508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.851 [2024-12-15 10:44:55.819558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.851 [2024-12-15 10:44:55.819572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.851 [2024-12-15 10:44:55.819622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.852 [2024-12-15 10:44:55.819635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.852 #23 NEW cov: 11805 ft: 12759 corp: 7/211b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 ChangeBit- 00:07:06.852 [2024-12-15 10:44:55.859500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.852 [2024-12-15 10:44:55.859525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.852 [2024-12-15 10:44:55.859569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.852 [2024-12-15 10:44:55.859582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.852 [2024-12-15 10:44:55.859634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.852 [2024-12-15 10:44:55.859647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.852 [2024-12-15 10:44:55.859699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.852 [2024-12-15 10:44:55.859712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.852 [2024-12-15 10:44:55.859762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.852 [2024-12-15 10:44:55.859775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.111 #24 NEW cov: 11805 ft: 12798 corp: 8/246b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:07.111 [2024-12-15 10:44:55.899125] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.111 [2024-12-15 10:44:55.899149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.111 [2024-12-15 10:44:55.899201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.111 [2024-12-15 10:44:55.899214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.111 #25 NEW cov: 11805 ft: 13267 corp: 9/265b lim: 35 exec/s: 0 rss: 69Mb L: 19/35 MS: 1 EraseBytes- 00:07:07.111 [2024-12-15 10:44:55.939722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.111 [2024-12-15 10:44:55.939746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.111 [2024-12-15 10:44:55.939816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ff000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.111 [2024-12-15 10:44:55.939830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.111 [2024-12-15 10:44:55.939884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.111 [2024-12-15 10:44:55.939897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.111 [2024-12-15 10:44:55.939931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.111 [2024-12-15 10:44:55.939943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.111 [2024-12-15 10:44:55.939994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.111 [2024-12-15 10:44:55.940008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.111 #26 NEW cov: 11805 ft: 13353 corp: 10/300b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:07.111 [2024-12-15 10:44:55.979539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.111 [2024-12-15 10:44:55.979563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.111 [2024-12-15 10:44:55.979616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.111 [2024-12-15 10:44:55.979629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.111 [2024-12-15 10:44:55.979680] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.111 [2024-12-15 10:44:55.979693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.111 #27 NEW cov: 11805 ft: 13589 corp: 11/323b lim: 35 exec/s: 0 rss: 69Mb L: 23/35 MS: 1 EraseBytes- 00:07:07.111 [2024-12-15 10:44:56.019804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.111 [2024-12-15 10:44:56.019831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.112 [2024-12-15 10:44:56.019883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.112 [2024-12-15 10:44:56.019896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.112 [2024-12-15 10:44:56.019948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.112 [2024-12-15 10:44:56.019961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.112 [2024-12-15 10:44:56.020014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.112 [2024-12-15 10:44:56.020026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.112 #28 NEW cov: 11805 ft: 13663 corp: 12/354b lim: 35 exec/s: 0 rss: 69Mb L: 31/35 MS: 1 EraseBytes- 00:07:07.112 [2024-12-15 10:44:56.059933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.112 [2024-12-15 10:44:56.059956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.112 [2024-12-15 10:44:56.060011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.112 [2024-12-15 10:44:56.060025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.112 [2024-12-15 10:44:56.060077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.112 [2024-12-15 10:44:56.060090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.112 [2024-12-15 10:44:56.060142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.112 [2024-12-15 10:44:56.060154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.112 #29 NEW cov: 11805 ft: 13692 
corp: 13/387b lim: 35 exec/s: 0 rss: 70Mb L: 33/35 MS: 1 CrossOver- 00:07:07.112 [2024-12-15 10:44:56.100010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00001200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.112 [2024-12-15 10:44:56.100033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.112 [2024-12-15 10:44:56.100085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.112 [2024-12-15 10:44:56.100098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.112 [2024-12-15 10:44:56.100152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.112 [2024-12-15 10:44:56.100166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.112 [2024-12-15 10:44:56.100218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.112 [2024-12-15 10:44:56.100231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.371 #30 NEW cov: 11805 ft: 13717 corp: 14/420b lim: 35 exec/s: 0 rss: 70Mb L: 33/35 MS: 1 ChangeBinInt- 00:07:07.371 [2024-12-15 10:44:56.140419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:003f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.371 [2024-12-15 10:44:56.140443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.371 [2024-12-15 10:44:56.140496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.371 [2024-12-15 10:44:56.140510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.371 [2024-12-15 10:44:56.140563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.371 [2024-12-15 10:44:56.140576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.371 [2024-12-15 10:44:56.140629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.371 [2024-12-15 10:44:56.140641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.371 [2024-12-15 10:44:56.140692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.371 [2024-12-15 10:44:56.140705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.371 #31 NEW cov: 11805 ft: 13737 
corp: 15/455b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 ChangeByte- 00:07:07.371 [2024-12-15 10:44:56.180331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.371 [2024-12-15 10:44:56.180355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.372 [2024-12-15 10:44:56.180408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.372 [2024-12-15 10:44:56.180426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.372 [2024-12-15 10:44:56.180478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.372 [2024-12-15 10:44:56.180491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.372 [2024-12-15 10:44:56.180544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.372 [2024-12-15 10:44:56.180557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.372 #32 NEW cov: 11805 ft: 13758 corp: 16/484b lim: 35 exec/s: 0 rss: 70Mb L: 29/35 MS: 1 EraseBytes- 00:07:07.372 [2024-12-15 10:44:56.220427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.372 [2024-12-15 10:44:56.220452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.372 [2024-12-15 10:44:56.220505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.372 [2024-12-15 10:44:56.220518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.372 [2024-12-15 10:44:56.220577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.372 [2024-12-15 10:44:56.220590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.372 [2024-12-15 10:44:56.220641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.372 [2024-12-15 10:44:56.220654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.372 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:07.372 #33 NEW cov: 11828 ft: 13848 corp: 17/518b lim: 35 exec/s: 0 rss: 70Mb L: 34/35 MS: 1 InsertRepeatedBytes- 00:07:07.372 [2024-12-15 10:44:56.260607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.372 [2024-12-15 10:44:56.260633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.372 [2024-12-15 10:44:56.260688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.372 [2024-12-15 10:44:56.260701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.372 [2024-12-15 10:44:56.260752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.372 [2024-12-15 10:44:56.260765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.372 [2024-12-15 10:44:56.260817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.372 [2024-12-15 10:44:56.260829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.372 #34 NEW cov: 11828 ft: 13859 corp: 18/549b lim: 35 exec/s: 0 rss: 70Mb L: 31/35 MS: 1 ChangeByte- 00:07:07.372 [2024-12-15 10:44:56.300405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.372 [2024-12-15 10:44:56.300436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.372 [2024-12-15 10:44:56.300490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.372 [2024-12-15 10:44:56.300503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.372 #35 NEW cov: 11828 ft: 13974 corp: 19/565b lim: 35 exec/s: 0 rss: 70Mb L: 16/35 MS: 1 EraseBytes- 00:07:07.372 [2024-12-15 10:44:56.340568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.372 [2024-12-15 10:44:56.340594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.372 [2024-12-15 10:44:56.340647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.372 [2024-12-15 10:44:56.340661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.372 #36 NEW cov: 11828 ft: 13981 corp: 20/581b lim: 35 exec/s: 36 rss: 70Mb L: 16/35 MS: 1 ShuffleBytes- 00:07:07.372 [2024-12-15 10:44:56.381150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.372 [2024-12-15 10:44:56.381179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.372 [2024-12-15 10:44:56.381233] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.372 [2024-12-15 10:44:56.381246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.372 [2024-12-15 10:44:56.381298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.372 [2024-12-15 10:44:56.381312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.372 [2024-12-15 10:44:56.381366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.372 [2024-12-15 10:44:56.381379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.372 [2024-12-15 10:44:56.381436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.372 [2024-12-15 10:44:56.381450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.631 #37 NEW cov: 11828 ft: 13990 corp: 21/616b lim: 35 exec/s: 37 rss: 70Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:07.631 [2024-12-15 10:44:56.420762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.631 [2024-12-15 10:44:56.420787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.631 [2024-12-15 10:44:56.420841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.631 [2024-12-15 10:44:56.420855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.632 #38 NEW cov: 11828 ft: 14000 corp: 22/635b lim: 35 exec/s: 38 rss: 70Mb L: 19/35 MS: 1 ShuffleBytes- 00:07:07.632 [2024-12-15 10:44:56.461160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.632 [2024-12-15 10:44:56.461184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.632 [2024-12-15 10:44:56.461254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.632 [2024-12-15 10:44:56.461268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.632 [2024-12-15 10:44:56.461320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00104000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.632 [2024-12-15 10:44:56.461333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.632 [2024-12-15 
10:44:56.461386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.632 [2024-12-15 10:44:56.461399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.632 #39 NEW cov: 11828 ft: 14017 corp: 23/669b lim: 35 exec/s: 39 rss: 70Mb L: 34/35 MS: 1 ChangeBit- 00:07:07.632 [2024-12-15 10:44:56.501439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.632 [2024-12-15 10:44:56.501468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.632 [2024-12-15 10:44:56.501522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.632 [2024-12-15 10:44:56.501535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.632 [2024-12-15 10:44:56.501588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.632 [2024-12-15 10:44:56.501602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.632 [2024-12-15 10:44:56.501653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.632 [2024-12-15 10:44:56.501666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.632 [2024-12-15 10:44:56.501718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.632 [2024-12-15 10:44:56.501731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.632 #40 NEW cov: 11828 ft: 14027 corp: 24/704b lim: 35 exec/s: 40 rss: 70Mb L: 35/35 MS: 1 CopyPart- 00:07:07.632 [2024-12-15 10:44:56.541370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.632 [2024-12-15 10:44:56.541395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.632 [2024-12-15 10:44:56.541450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.632 [2024-12-15 10:44:56.541463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.632 [2024-12-15 10:44:56.541515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.632 [2024-12-15 10:44:56.541528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.632 [2024-12-15 
10:44:56.541579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.632 [2024-12-15 10:44:56.541592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.632 #41 NEW cov: 11828 ft: 14036 corp: 25/732b lim: 35 exec/s: 41 rss: 70Mb L: 28/35 MS: 1 EraseBytes- 00:07:07.632 [2024-12-15 10:44:56.581325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.632 [2024-12-15 10:44:56.581349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.632 [2024-12-15 10:44:56.581402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:7c7c0000 cdw11:7c7c0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.632 [2024-12-15 10:44:56.581420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.632 [2024-12-15 10:44:56.581473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00007c00 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.632 [2024-12-15 10:44:56.581489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.632 #42 NEW cov: 11828 ft: 14068 corp: 26/757b lim: 35 exec/s: 42 rss: 70Mb L: 25/35 MS: 1 InsertRepeatedBytes- 00:07:07.632 [2024-12-15 10:44:56.621453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.632 [2024-12-15 10:44:56.621477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.632 [2024-12-15 10:44:56.621532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.632 [2024-12-15 10:44:56.621545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.632 [2024-12-15 10:44:56.621597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.632 [2024-12-15 10:44:56.621610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.892 #43 NEW cov: 11828 ft: 14091 corp: 27/778b lim: 35 exec/s: 43 rss: 70Mb L: 21/35 MS: 1 EraseBytes- 00:07:07.892 [2024-12-15 10:44:56.661883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.892 [2024-12-15 10:44:56.661907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.892 [2024-12-15 10:44:56.661961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.892 [2024-12-15 10:44:56.661975] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.892 [2024-12-15 10:44:56.662027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.892 [2024-12-15 10:44:56.662040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.892 [2024-12-15 10:44:56.662092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.892 [2024-12-15 10:44:56.662104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.892 [2024-12-15 10:44:56.662155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.892 [2024-12-15 10:44:56.662168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.892 #44 NEW cov: 11828 ft: 14093 corp: 28/813b lim: 35 exec/s: 44 rss: 70Mb L: 35/35 MS: 1 CMP- DE: "\002\000"- 00:07:07.892 [2024-12-15 10:44:56.691978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.892 [2024-12-15 10:44:56.692003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.892 [2024-12-15 10:44:56.692056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.892 [2024-12-15 10:44:56.692070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.892 [2024-12-15 10:44:56.692122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.892 [2024-12-15 10:44:56.692138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.892 [2024-12-15 10:44:56.692190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.892 [2024-12-15 10:44:56.692202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.892 [2024-12-15 10:44:56.692253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.892 [2024-12-15 10:44:56.692266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.892 #45 NEW cov: 11828 ft: 14145 corp: 29/848b lim: 35 exec/s: 45 rss: 70Mb L: 35/35 MS: 1 CopyPart- 00:07:07.892 [2024-12-15 10:44:56.731933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.892 [2024-12-15 10:44:56.731957] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.892 [2024-12-15 10:44:56.732010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.892 [2024-12-15 10:44:56.732023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.892 [2024-12-15 10:44:56.732056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.892 [2024-12-15 10:44:56.732069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.892 [2024-12-15 10:44:56.732121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.892 [2024-12-15 10:44:56.732133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.892 #46 NEW cov: 11828 ft: 14151 corp: 30/881b lim: 35 exec/s: 46 rss: 70Mb L: 33/35 MS: 1 ChangeBit- 00:07:07.892 [2024-12-15 10:44:56.772024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00310a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.892 [2024-12-15 10:44:56.772048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.892 [2024-12-15 10:44:56.772101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.892 [2024-12-15 10:44:56.772115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.892 [2024-12-15 10:44:56.772166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.892 [2024-12-15 10:44:56.772179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.893 [2024-12-15 10:44:56.772231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.893 [2024-12-15 10:44:56.772243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.893 #47 NEW cov: 11828 ft: 14157 corp: 31/912b lim: 35 exec/s: 47 rss: 70Mb L: 31/35 MS: 1 ChangeByte- 00:07:07.893 [2024-12-15 10:44:56.812319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.893 [2024-12-15 10:44:56.812347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.893 [2024-12-15 10:44:56.812421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.893 [2024-12-15 10:44:56.812435] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.893 [2024-12-15 10:44:56.812487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.893 [2024-12-15 10:44:56.812500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.893 [2024-12-15 10:44:56.812554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.893 [2024-12-15 10:44:56.812567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.893 [2024-12-15 10:44:56.812616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.893 [2024-12-15 10:44:56.812630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.893 #48 NEW cov: 11828 ft: 14163 corp: 32/947b lim: 35 exec/s: 48 rss: 70Mb L: 35/35 MS: 1 CMP- DE: "\000\000\000\000"- 00:07:07.893 [2024-12-15 10:44:56.852443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.893 [2024-12-15 10:44:56.852467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.893 [2024-12-15 10:44:56.852519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00230002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.893 [2024-12-15 10:44:56.852532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.893 [2024-12-15 10:44:56.852582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.893 [2024-12-15 10:44:56.852596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.893 [2024-12-15 10:44:56.852648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.893 [2024-12-15 10:44:56.852660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.893 [2024-12-15 10:44:56.852711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.893 [2024-12-15 10:44:56.852724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.893 #49 NEW cov: 11828 ft: 14175 corp: 33/982b lim: 35 exec/s: 49 rss: 70Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:07.893 [2024-12-15 10:44:56.892532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.893 
[2024-12-15 10:44:56.892556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.893 [2024-12-15 10:44:56.892610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00003b00 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.893 [2024-12-15 10:44:56.892623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.893 [2024-12-15 10:44:56.892695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.893 [2024-12-15 10:44:56.892708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.893 [2024-12-15 10:44:56.892759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.893 [2024-12-15 10:44:56.892772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.893 [2024-12-15 10:44:56.892823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.893 [2024-12-15 10:44:56.892836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:08.153 #50 NEW cov: 11828 ft: 14188 corp: 34/1017b lim: 35 exec/s: 50 rss: 70Mb L: 35/35 MS: 1 ChangeByte- 00:07:08.153 [2024-12-15 10:44:56.932655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.153 [2024-12-15 10:44:56.932679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.153 [2024-12-15 10:44:56.932731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.153 [2024-12-15 10:44:56.932745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.153 [2024-12-15 10:44:56.932782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.153 [2024-12-15 10:44:56.932796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.153 [2024-12-15 10:44:56.932848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.153 [2024-12-15 10:44:56.932861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.153 [2024-12-15 10:44:56.932912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.153 [2024-12-15 10:44:56.932925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:08.153 #51 NEW cov: 11828 ft: 14205 corp: 35/1052b lim: 35 exec/s: 51 rss: 70Mb L: 35/35 MS: 1 CopyPart- 00:07:08.153 [2024-12-15 10:44:56.972667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a00 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.153 [2024-12-15 10:44:56.972692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.153 [2024-12-15 10:44:56.972745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fd00ffff cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.153 [2024-12-15 10:44:56.972758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.153 [2024-12-15 10:44:56.972812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.153 [2024-12-15 10:44:56.972825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.153 [2024-12-15 10:44:56.972876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.153 [2024-12-15 10:44:56.972892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.153 #52 NEW cov: 11828 ft: 14214 corp: 36/1080b lim: 35 exec/s: 52 rss: 70Mb L: 28/35 MS: 1 ChangeBinInt- 00:07:08.153 [2024-12-15 10:44:57.012742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.153 [2024-12-15 10:44:57.012766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.153 [2024-12-15 10:44:57.012819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.153 [2024-12-15 10:44:57.012832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.153 [2024-12-15 10:44:57.012882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:40004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.153 [2024-12-15 10:44:57.012895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.153 [2024-12-15 10:44:57.012946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.153 [2024-12-15 10:44:57.012958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.153 #53 NEW cov: 11828 ft: 14227 corp: 37/1111b lim: 35 exec/s: 53 rss: 70Mb L: 31/35 MS: 1 ChangeByte- 00:07:08.153 [2024-12-15 10:44:57.052537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
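Every spdk_nvme_print_completion record above carries the same trailer: the "(00/01)" pair is the NVMe status code type / status code (generic command status / invalid opcode — expected, since the fuzzer is deliberately sending malformed admin commands), cdw0 is the command-specific result, sqhd the submission queue head pointer, and p/m/dnr the phase, more, and do-not-retry bits. A minimal C sketch of that decoding follows, assuming the standard 16-byte NVMe completion-entry layout from the base specification; the struct and function names are illustrative only, not SPDK's.

#include <stdint.h>
#include <stdio.h>

/* Illustrative layout of a 16-byte NVMe completion entry (field packing
 * per the NVMe base spec; names are not SPDK's). */
struct nvme_cpl_sketch {
    uint32_t cdw0;   /* command-specific result */
    uint32_t rsvd;
    uint16_t sqhd;   /* submission queue head pointer */
    uint16_t sqid;   /* submission queue identifier */
    uint16_t cid;    /* command identifier */
    uint16_t status; /* phase tag + status field */
};

static void print_completion_sketch(const struct nvme_cpl_sketch *c)
{
    unsigned p   =  c->status        & 0x1;  /* phase tag */
    unsigned sc  = (c->status >> 1)  & 0xff; /* status code; 0x01 = INVALID OPCODE */
    unsigned sct = (c->status >> 9)  & 0x7;  /* status code type; 0 = generic */
    unsigned m   = (c->status >> 14) & 0x1;  /* more status information */
    unsigned dnr = (c->status >> 15) & 0x1;  /* do not retry */

    printf("(%02x/%02x) cid:%u cdw0:%x sqhd:%04x p:%u m:%u dnr:%u\n",
           sct, sc, (unsigned)c->cid, (unsigned)c->cdw0,
           (unsigned)c->sqhd, p, m, dnr);
}

int main(void)
{
    /* Values taken from one of the records above:
     * "INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0" */
    struct nvme_cpl_sketch c = { .cdw0 = 0, .sqhd = 0x0013, .cid = 8,
                                 .status = 0x01 << 1 };
    print_completion_sketch(&c);
    return 0;
}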
00:07:08.153 [2024-12-15 10:44:57.052561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.153 [2024-12-15 10:44:57.052615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.153 [2024-12-15 10:44:57.052628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.153 #54 NEW cov: 11828 ft: 14238 corp: 38/1130b lim: 35 exec/s: 54 rss: 70Mb L: 19/35 MS: 1 ShuffleBytes- 00:07:08.153 [2024-12-15 10:44:57.093036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.153 [2024-12-15 10:44:57.093060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.153 [2024-12-15 10:44:57.093127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.153 [2024-12-15 10:44:57.093141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.153 [2024-12-15 10:44:57.093193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.153 [2024-12-15 10:44:57.093206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.153 [2024-12-15 10:44:57.093257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:08000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.153 [2024-12-15 10:44:57.093270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.153 #55 NEW cov: 11828 ft: 14241 corp: 39/1160b lim: 35 exec/s: 55 rss: 70Mb L: 30/35 MS: 1 CopyPart- 00:07:08.153 [2024-12-15 10:44:57.133135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00001200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.153 [2024-12-15 10:44:57.133159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.153 [2024-12-15 10:44:57.133210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.153 [2024-12-15 10:44:57.133223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.153 [2024-12-15 10:44:57.133274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:10000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.153 [2024-12-15 10:44:57.133287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.153 [2024-12-15 10:44:57.133339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:08.153 [2024-12-15 10:44:57.133351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.153 #56 NEW cov: 11828 ft: 14257 corp: 40/1193b lim: 35 exec/s: 56 rss: 70Mb L: 33/35 MS: 1 CrossOver- 00:07:08.413 [2024-12-15 10:44:57.173253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.413 [2024-12-15 10:44:57.173277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.413 [2024-12-15 10:44:57.173330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c3c300c3 cdw11:c3c30003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.413 [2024-12-15 10:44:57.173343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.413 [2024-12-15 10:44:57.173397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:c300c3c3 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.413 [2024-12-15 10:44:57.173410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.413 [2024-12-15 10:44:57.173484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.413 [2024-12-15 10:44:57.173497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.413 #57 NEW cov: 11828 ft: 14276 corp: 41/1221b lim: 35 exec/s: 57 rss: 70Mb L: 28/35 MS: 1 InsertRepeatedBytes- 00:07:08.413 [2024-12-15 10:44:57.213371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.413 [2024-12-15 10:44:57.213395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.413 [2024-12-15 10:44:57.213451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.413 [2024-12-15 10:44:57.213464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.413 [2024-12-15 10:44:57.213516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.413 [2024-12-15 10:44:57.213530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.413 [2024-12-15 10:44:57.213582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.413 [2024-12-15 10:44:57.213597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.413 #58 NEW cov: 11828 ft: 14282 corp: 42/1252b lim: 35 exec/s: 58 rss: 70Mb L: 31/35 MS: 1 InsertRepeatedBytes- 00:07:08.413 [2024-12-15 10:44:57.253652] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.413 [2024-12-15 10:44:57.253676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.413 [2024-12-15 10:44:57.253730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ff000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.413 [2024-12-15 10:44:57.253743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.413 [2024-12-15 10:44:57.253796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.413 [2024-12-15 10:44:57.253808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.413 [2024-12-15 10:44:57.253861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.413 [2024-12-15 10:44:57.253874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.413 [2024-12-15 10:44:57.253928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.413 [2024-12-15 10:44:57.253940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:08.413 #59 NEW cov: 11828 ft: 14340 corp: 43/1287b lim: 35 exec/s: 59 rss: 70Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:08.413 [2024-12-15 10:44:57.293470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.413 [2024-12-15 10:44:57.293494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.413 [2024-12-15 10:44:57.293566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:7c7c0000 cdw11:7c7c0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.413 [2024-12-15 10:44:57.293580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.413 [2024-12-15 10:44:57.293633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00007c00 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.414 [2024-12-15 10:44:57.293647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.414 #60 NEW cov: 11828 ft: 14370 corp: 44/1314b lim: 35 exec/s: 60 rss: 70Mb L: 27/35 MS: 1 PersAutoDict- DE: "\002\000"- 00:07:08.414 [2024-12-15 10:44:57.333759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.414 [2024-12-15 10:44:57.333784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.414 [2024-12-15 
10:44:57.333838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c3c300c3 cdw11:c3c30003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.414 [2024-12-15 10:44:57.333851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.414 [2024-12-15 10:44:57.333902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:c300c3c3 cdw11:c4000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.414 [2024-12-15 10:44:57.333918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.414 [2024-12-15 10:44:57.333970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.414 [2024-12-15 10:44:57.333982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.414 #61 NEW cov: 11828 ft: 14410 corp: 45/1343b lim: 35 exec/s: 30 rss: 70Mb L: 29/35 MS: 1 InsertByte- 00:07:08.414 #61 DONE cov: 11828 ft: 14410 corp: 45/1343b lim: 35 exec/s: 30 rss: 70Mb 00:07:08.414 ###### Recommended dictionary. ###### 00:07:08.414 "\002\000" # Uses: 1 00:07:08.414 "\000\000\000\000" # Uses: 0 00:07:08.414 ###### End of recommended dictionary. ###### 00:07:08.414 Done 61 runs in 2 second(s) 00:07:08.673 10:44:57 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_4.conf 00:07:08.673 10:44:57 -- ../common.sh@72 -- # (( i++ )) 00:07:08.673 10:44:57 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:08.673 10:44:57 -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:08.673 10:44:57 -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:07:08.673 10:44:57 -- nvmf/run.sh@24 -- # local timen=1 00:07:08.673 10:44:57 -- nvmf/run.sh@25 -- # local core=0x1 00:07:08.673 10:44:57 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:08.673 10:44:57 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:07:08.673 10:44:57 -- nvmf/run.sh@29 -- # printf %02d 5 00:07:08.673 10:44:57 -- nvmf/run.sh@29 -- # port=4405 00:07:08.673 10:44:57 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:08.673 10:44:57 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:07:08.673 10:44:57 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:08.673 10:44:57 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 -r /var/tmp/spdk5.sock 00:07:08.673 [2024-12-15 10:44:57.512927] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
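The nvmf/run.sh trace above shows the per-instance setup for fuzzer 5: each run derives its own NVMe/TCP port from the fuzzer number, rewrites the listener port in a private copy of fuzz_json.conf, and launches llvm_nvme_fuzz against it. A minimal sketch of that sequence, reconstructed from the traced commands (the redirection into $nvmf_cfg and the shortened $rootdir are assumptions; the trace shows only the sed expression and the full workspace paths):

    fuzzer_type=5                                  # -Z argument; also names the conf, corpus, and socket
    port=44$(printf %02d "$fuzzer_type")           # 4405 for run 5, 4406 for run 6
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
    mkdir -p "$rootdir/../corpus/llvm_nvmf_$fuzzer_type"
    # point the JSON config's NVMe/TCP listener at this run's port
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
    "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
        -P "$rootdir/../output/llvm/" -F "$trid" -c "$nvmf_cfg" -t 1 \
        -D "$rootdir/../corpus/llvm_nvmf_$fuzzer_type" -Z "$fuzzer_type" \
        -r "/var/tmp/spdk$fuzzer_type.sock"

This is a reconstruction from the trace, not the verbatim script; the same pattern repeats below for fuzzer 6 with port 4406.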
00:07:08.673 [2024-12-15 10:44:57.512994] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1308119 ] 00:07:08.673 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.932 [2024-12-15 10:44:57.696085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.933 [2024-12-15 10:44:57.759518] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:08.933 [2024-12-15 10:44:57.759641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.933 [2024-12-15 10:44:57.817335] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.933 [2024-12-15 10:44:57.833649] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:07:08.933 INFO: Running with entropic power schedule (0xFF, 100). 00:07:08.933 INFO: Seed: 4110961325 00:07:08.933 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:08.933 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:08.933 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:08.933 INFO: A corpus is not provided, starting from an empty corpus 00:07:08.933 #2 INITED exec/s: 0 rss: 60Mb 00:07:08.933 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:08.933 This may also happen if the target rejected all inputs we tried so far 00:07:08.933 [2024-12-15 10:44:57.899967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00006000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.933 [2024-12-15 10:44:57.900005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.933 [2024-12-15 10:44:57.900121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.933 [2024-12-15 10:44:57.900138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.191 NEW_FUNC[1/671]: 0x442af8 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:07:09.191 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:09.191 #6 NEW cov: 11612 ft: 11613 corp: 2/27b lim: 45 exec/s: 0 rss: 68Mb L: 26/26 MS: 4 ChangeByte-ShuffleBytes-CMP-InsertRepeatedBytes- DE: "\000\000\000@"- 00:07:09.450 [2024-12-15 10:44:58.220790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.450 [2024-12-15 10:44:58.220830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.450 [2024-12-15 10:44:58.220958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.450 [2024-12-15 10:44:58.220977] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.450 #8 NEW cov: 11725 ft: 12210 corp: 3/45b lim: 45 exec/s: 0 rss: 68Mb L: 18/26 MS: 2 CopyPart-CrossOver- 00:07:09.450 [2024-12-15 10:44:58.260676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.450 [2024-12-15 10:44:58.260704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.450 [2024-12-15 10:44:58.260824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.450 [2024-12-15 10:44:58.260841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.450 #10 NEW cov: 11731 ft: 12521 corp: 4/65b lim: 45 exec/s: 0 rss: 68Mb L: 20/26 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:09.450 [2024-12-15 10:44:58.300587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.450 [2024-12-15 10:44:58.300614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.450 #13 NEW cov: 11816 ft: 13481 corp: 5/74b lim: 45 exec/s: 0 rss: 68Mb L: 9/26 MS: 3 PersAutoDict-ShuffleBytes-PersAutoDict- DE: "\000\000\000@"-"\000\000\000@"- 00:07:09.450 [2024-12-15 10:44:58.340995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00006000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.450 [2024-12-15 10:44:58.341023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.450 [2024-12-15 10:44:58.341146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.450 [2024-12-15 10:44:58.341164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.450 #14 NEW cov: 11816 ft: 13558 corp: 6/100b lim: 45 exec/s: 0 rss: 68Mb L: 26/26 MS: 1 CopyPart- 00:07:09.450 [2024-12-15 10:44:58.381104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:20000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.450 [2024-12-15 10:44:58.381134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.450 [2024-12-15 10:44:58.381251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.450 [2024-12-15 10:44:58.381267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.450 #15 NEW cov: 11816 ft: 13663 corp: 7/120b lim: 45 exec/s: 0 rss: 68Mb L: 20/26 MS: 1 ChangeBit- 00:07:09.450 [2024-12-15 10:44:58.421203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:20000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.450 
[2024-12-15 10:44:58.421230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.450 [2024-12-15 10:44:58.421342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.450 [2024-12-15 10:44:58.421359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.450 #16 NEW cov: 11816 ft: 13710 corp: 8/140b lim: 45 exec/s: 0 rss: 68Mb L: 20/26 MS: 1 CrossOver- 00:07:09.450 [2024-12-15 10:44:58.461643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00006000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.450 [2024-12-15 10:44:58.461670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.450 [2024-12-15 10:44:58.461786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.450 [2024-12-15 10:44:58.461802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.451 [2024-12-15 10:44:58.461919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.451 [2024-12-15 10:44:58.461936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.710 #17 NEW cov: 11816 ft: 13973 corp: 9/175b lim: 45 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 CopyPart- 00:07:09.710 [2024-12-15 10:44:58.501465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00006060 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.710 [2024-12-15 10:44:58.501492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.710 [2024-12-15 10:44:58.501608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.710 [2024-12-15 10:44:58.501624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.710 #18 NEW cov: 11816 ft: 14051 corp: 10/201b lim: 45 exec/s: 0 rss: 68Mb L: 26/35 MS: 1 CopyPart- 00:07:09.710 [2024-12-15 10:44:58.542116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.710 [2024-12-15 10:44:58.542143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.710 [2024-12-15 10:44:58.542264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.710 [2024-12-15 10:44:58.542281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.710 [2024-12-15 10:44:58.542402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ 
(01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.710 [2024-12-15 10:44:58.542421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.710 [2024-12-15 10:44:58.542540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.710 [2024-12-15 10:44:58.542557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.710 #20 NEW cov: 11816 ft: 14416 corp: 11/237b lim: 45 exec/s: 0 rss: 68Mb L: 36/36 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:09.710 [2024-12-15 10:44:58.582110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00006000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.710 [2024-12-15 10:44:58.582137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.710 [2024-12-15 10:44:58.582263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.711 [2024-12-15 10:44:58.582280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.711 [2024-12-15 10:44:58.582379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.711 [2024-12-15 10:44:58.582396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.711 [2024-12-15 10:44:58.582515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:2c000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.711 [2024-12-15 10:44:58.582533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.711 #21 NEW cov: 11816 ft: 14450 corp: 12/273b lim: 45 exec/s: 0 rss: 68Mb L: 36/36 MS: 1 InsertByte- 00:07:09.711 [2024-12-15 10:44:58.632151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00006000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.711 [2024-12-15 10:44:58.632179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.711 [2024-12-15 10:44:58.632306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.711 [2024-12-15 10:44:58.632322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.711 [2024-12-15 10:44:58.632438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:85ec00f3 cdw11:3ac40004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.711 [2024-12-15 10:44:58.632456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.711 #22 NEW cov: 11816 ft: 14471 corp: 13/307b lim: 45 exec/s: 0 rss: 68Mb L: 34/36 MS: 
1 CMP- DE: "\363\205\354:\304\215\004\000"- 00:07:09.711 [2024-12-15 10:44:58.671926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:20000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.711 [2024-12-15 10:44:58.671954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.711 [2024-12-15 10:44:58.672062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.711 [2024-12-15 10:44:58.672081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.711 #23 NEW cov: 11816 ft: 14495 corp: 14/327b lim: 45 exec/s: 0 rss: 68Mb L: 20/36 MS: 1 ShuffleBytes- 00:07:09.711 [2024-12-15 10:44:58.712603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:59596000 cdw11:59590002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.711 [2024-12-15 10:44:58.712628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.711 [2024-12-15 10:44:58.712743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:59005959 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.711 [2024-12-15 10:44:58.712760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.711 [2024-12-15 10:44:58.712875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.711 [2024-12-15 10:44:58.712892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.711 [2024-12-15 10:44:58.713016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:f3850000 cdw11:ec3a0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.711 [2024-12-15 10:44:58.713032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.970 #24 NEW cov: 11816 ft: 14524 corp: 15/371b lim: 45 exec/s: 0 rss: 69Mb L: 44/44 MS: 1 InsertRepeatedBytes- 00:07:09.970 [2024-12-15 10:44:58.763135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:59596000 cdw11:59590002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.970 [2024-12-15 10:44:58.763163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.970 [2024-12-15 10:44:58.763290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:59005959 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.970 [2024-12-15 10:44:58.763306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.970 [2024-12-15 10:44:58.763402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.970 [2024-12-15 10:44:58.763421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.970 [2024-12-15 10:44:58.763538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00f30000 cdw11:85ec0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.970 [2024-12-15 10:44:58.763554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.970 [2024-12-15 10:44:58.763667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:00000400 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.971 [2024-12-15 10:44:58.763684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:09.971 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:09.971 #25 NEW cov: 11839 ft: 14650 corp: 16/416b lim: 45 exec/s: 0 rss: 69Mb L: 45/45 MS: 1 CopyPart- 00:07:09.971 [2024-12-15 10:44:58.812694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:59596000 cdw11:59590002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.971 [2024-12-15 10:44:58.812721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.971 [2024-12-15 10:44:58.812839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:59005959 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.971 [2024-12-15 10:44:58.812858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.971 [2024-12-15 10:44:58.812969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.971 [2024-12-15 10:44:58.812985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.971 #26 NEW cov: 11839 ft: 14665 corp: 17/448b lim: 45 exec/s: 0 rss: 69Mb L: 32/45 MS: 1 EraseBytes- 00:07:09.971 [2024-12-15 10:44:58.852870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.971 [2024-12-15 10:44:58.852897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.971 [2024-12-15 10:44:58.853012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.971 [2024-12-15 10:44:58.853030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.971 [2024-12-15 10:44:58.853141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.971 [2024-12-15 10:44:58.853158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.971 [2024-12-15 10:44:58.853266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000200 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:09.971 [2024-12-15 10:44:58.853283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.971 #27 NEW cov: 11839 ft: 14713 corp: 18/486b lim: 45 exec/s: 27 rss: 69Mb L: 38/45 MS: 1 CMP- DE: "\001\002"- 00:07:09.971 [2024-12-15 10:44:58.902943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00006000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.971 [2024-12-15 10:44:58.902970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.971 [2024-12-15 10:44:58.903086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.971 [2024-12-15 10:44:58.903102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.971 [2024-12-15 10:44:58.903212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:3aecf300 cdw11:00c40004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.971 [2024-12-15 10:44:58.903229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.971 #28 NEW cov: 11839 ft: 14740 corp: 19/520b lim: 45 exec/s: 28 rss: 69Mb L: 34/45 MS: 1 ShuffleBytes- 00:07:09.971 [2024-12-15 10:44:58.943039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00006000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.971 [2024-12-15 10:44:58.943067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.971 [2024-12-15 10:44:58.943182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.971 [2024-12-15 10:44:58.943198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.971 [2024-12-15 10:44:58.943315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:7f000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.971 [2024-12-15 10:44:58.943334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.971 #29 NEW cov: 11839 ft: 14751 corp: 20/547b lim: 45 exec/s: 29 rss: 69Mb L: 27/45 MS: 1 InsertByte- 00:07:10.231 [2024-12-15 10:44:59.002708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.231 [2024-12-15 10:44:59.002735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.231 #30 NEW cov: 11839 ft: 14796 corp: 21/556b lim: 45 exec/s: 30 rss: 69Mb L: 9/45 MS: 1 ChangeByte- 00:07:10.231 [2024-12-15 10:44:59.063433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:59596000 cdw11:59590002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.231 [2024-12-15 10:44:59.063461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.231 [2024-12-15 10:44:59.063611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:59005959 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.231 [2024-12-15 10:44:59.063628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.231 [2024-12-15 10:44:59.063740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:02000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.231 [2024-12-15 10:44:59.063756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.231 #31 NEW cov: 11839 ft: 14863 corp: 22/588b lim: 45 exec/s: 31 rss: 69Mb L: 32/45 MS: 1 ChangeBit- 00:07:10.231 [2024-12-15 10:44:59.113285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.231 [2024-12-15 10:44:59.113312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.231 [2024-12-15 10:44:59.113436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:dc000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.231 [2024-12-15 10:44:59.113454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.231 #32 NEW cov: 11839 ft: 14886 corp: 23/606b lim: 45 exec/s: 32 rss: 69Mb L: 18/45 MS: 1 ChangeByte- 00:07:10.231 [2024-12-15 10:44:59.154042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff60ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.231 [2024-12-15 10:44:59.154068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.231 [2024-12-15 10:44:59.154187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.231 [2024-12-15 10:44:59.154203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.231 [2024-12-15 10:44:59.154310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.231 [2024-12-15 10:44:59.154326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.231 [2024-12-15 10:44:59.154459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.231 [2024-12-15 10:44:59.154475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.231 #33 NEW cov: 11839 ft: 14997 corp: 24/648b lim: 45 exec/s: 33 rss: 69Mb L: 42/45 MS: 1 InsertRepeatedBytes- 00:07:10.231 [2024-12-15 10:44:59.203289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:3e3e0001 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:10.231 [2024-12-15 10:44:59.203317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.231 #34 NEW cov: 11839 ft: 14999 corp: 25/662b lim: 45 exec/s: 34 rss: 70Mb L: 14/45 MS: 1 InsertRepeatedBytes- 00:07:10.490 [2024-12-15 10:44:59.263500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.490 [2024-12-15 10:44:59.263528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.490 #35 NEW cov: 11839 ft: 15082 corp: 26/673b lim: 45 exec/s: 35 rss: 70Mb L: 11/45 MS: 1 EraseBytes- 00:07:10.490 [2024-12-15 10:44:59.313677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:01000a0a cdw11:7f9a0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.490 [2024-12-15 10:44:59.313705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.490 #38 NEW cov: 11839 ft: 15089 corp: 27/683b lim: 45 exec/s: 38 rss: 70Mb L: 10/45 MS: 3 ShuffleBytes-CopyPart-CMP- DE: "\001\000\177\232\374\016\214Q"- 00:07:10.490 [2024-12-15 10:44:59.354036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00006060 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.490 [2024-12-15 10:44:59.354063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.490 [2024-12-15 10:44:59.354177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.490 [2024-12-15 10:44:59.354194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.490 #39 NEW cov: 11839 ft: 15095 corp: 28/709b lim: 45 exec/s: 39 rss: 70Mb L: 26/45 MS: 1 CopyPart- 00:07:10.490 [2024-12-15 10:44:59.394188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.490 [2024-12-15 10:44:59.394216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.490 [2024-12-15 10:44:59.394330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:dc000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.491 [2024-12-15 10:44:59.394346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.491 #40 NEW cov: 11839 ft: 15122 corp: 29/728b lim: 45 exec/s: 40 rss: 70Mb L: 19/45 MS: 1 InsertByte- 00:07:10.491 [2024-12-15 10:44:59.434212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.491 [2024-12-15 10:44:59.434240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.491 [2024-12-15 10:44:59.434353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:dc000000 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.491 [2024-12-15 10:44:59.434370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.491 #41 NEW cov: 11839 ft: 15132 corp: 30/754b lim: 45 exec/s: 41 rss: 70Mb L: 26/45 MS: 1 CrossOver- 00:07:10.491 [2024-12-15 10:44:59.474122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.491 [2024-12-15 10:44:59.474150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.491 #42 NEW cov: 11839 ft: 15138 corp: 31/766b lim: 45 exec/s: 42 rss: 70Mb L: 12/45 MS: 1 InsertByte- 00:07:10.750 [2024-12-15 10:44:59.514599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.750 [2024-12-15 10:44:59.514627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.750 [2024-12-15 10:44:59.514747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00af0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.750 [2024-12-15 10:44:59.514763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.750 #43 NEW cov: 11839 ft: 15147 corp: 32/786b lim: 45 exec/s: 43 rss: 70Mb L: 20/45 MS: 1 ChangeByte- 00:07:10.750 [2024-12-15 10:44:59.554861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00006000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.750 [2024-12-15 10:44:59.554887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.750 [2024-12-15 10:44:59.555007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffff00f7 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.750 [2024-12-15 10:44:59.555024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.750 [2024-12-15 10:44:59.555140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:3aecf300 cdw11:00c40004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.750 [2024-12-15 10:44:59.555158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.750 #44 NEW cov: 11839 ft: 15188 corp: 33/820b lim: 45 exec/s: 44 rss: 70Mb L: 34/45 MS: 1 ChangeBinInt- 00:07:10.750 [2024-12-15 10:44:59.595335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00006000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.750 [2024-12-15 10:44:59.595361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.750 [2024-12-15 10:44:59.595472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.750 [2024-12-15 10:44:59.595488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.750 [2024-12-15 10:44:59.595598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.750 [2024-12-15 10:44:59.595614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.750 [2024-12-15 10:44:59.595748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:002c0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.750 [2024-12-15 10:44:59.595765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.750 #45 NEW cov: 11839 ft: 15202 corp: 34/857b lim: 45 exec/s: 45 rss: 70Mb L: 37/45 MS: 1 InsertByte- 00:07:10.750 [2024-12-15 10:44:59.635176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00006000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.750 [2024-12-15 10:44:59.635203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.750 [2024-12-15 10:44:59.635310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.750 [2024-12-15 10:44:59.635330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.750 [2024-12-15 10:44:59.635439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:3aecf300 cdw11:00c40004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.750 [2024-12-15 10:44:59.635457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.750 #46 NEW cov: 11839 ft: 15222 corp: 35/892b lim: 45 exec/s: 46 rss: 70Mb L: 35/45 MS: 1 InsertByte- 00:07:10.750 [2024-12-15 10:44:59.675199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.750 [2024-12-15 10:44:59.675243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.750 [2024-12-15 10:44:59.675367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:02020202 cdw11:02020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.750 [2024-12-15 10:44:59.675385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.750 [2024-12-15 10:44:59.675494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:02020202 cdw11:02020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.750 [2024-12-15 10:44:59.675512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.750 #47 NEW cov: 11839 ft: 15232 corp: 36/926b lim: 45 exec/s: 47 rss: 70Mb L: 34/45 MS: 1 InsertRepeatedBytes- 00:07:10.750 [2024-12-15 10:44:59.714876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00006000 cdw11:00000000 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:07:10.750 [2024-12-15 10:44:59.714902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.750 #48 NEW cov: 11839 ft: 15248 corp: 37/941b lim: 45 exec/s: 48 rss: 70Mb L: 15/45 MS: 1 CrossOver- 00:07:10.750 [2024-12-15 10:44:59.754631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:20000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.750 [2024-12-15 10:44:59.754659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.009 #49 NEW cov: 11839 ft: 15256 corp: 38/958b lim: 45 exec/s: 49 rss: 70Mb L: 17/45 MS: 1 EraseBytes- 00:07:11.009 [2024-12-15 10:44:59.795008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:20000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.009 [2024-12-15 10:44:59.795035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.009 [2024-12-15 10:44:59.795165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.009 [2024-12-15 10:44:59.795183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.009 #50 NEW cov: 11839 ft: 15268 corp: 39/978b lim: 45 exec/s: 50 rss: 70Mb L: 20/45 MS: 1 ChangeBinInt- 00:07:11.009 [2024-12-15 10:44:59.835441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.009 [2024-12-15 10:44:59.835467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.009 [2024-12-15 10:44:59.835574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:001a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.009 [2024-12-15 10:44:59.835590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.009 #51 NEW cov: 11839 ft: 15279 corp: 40/996b lim: 45 exec/s: 51 rss: 70Mb L: 18/45 MS: 1 ChangeByte- 00:07:11.009 [2024-12-15 10:44:59.875332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:01000a0a cdw11:7f9a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.009 [2024-12-15 10:44:59.875357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.009 #52 NEW cov: 11839 ft: 15285 corp: 41/1006b lim: 45 exec/s: 26 rss: 70Mb L: 10/45 MS: 1 ChangeBinInt- 00:07:11.009 #52 DONE cov: 11839 ft: 15285 corp: 41/1006b lim: 45 exec/s: 26 rss: 70Mb 00:07:11.009 ###### Recommended dictionary. ###### 00:07:11.009 "\000\000\000@" # Uses: 2 00:07:11.009 "\363\205\354:\304\215\004\000" # Uses: 0 00:07:11.009 "\001\002" # Uses: 0 00:07:11.009 "\001\000\177\232\374\016\214Q" # Uses: 0 00:07:11.009 ###### End of recommended dictionary. 
###### 00:07:11.009 Done 52 runs in 2 second(s) 00:07:11.268 10:45:00 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_5.conf 00:07:11.269 10:45:00 -- ../common.sh@72 -- # (( i++ )) 00:07:11.269 10:45:00 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:11.269 10:45:00 -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:11.269 10:45:00 -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:07:11.269 10:45:00 -- nvmf/run.sh@24 -- # local timen=1 00:07:11.269 10:45:00 -- nvmf/run.sh@25 -- # local core=0x1 00:07:11.269 10:45:00 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:11.269 10:45:00 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:07:11.269 10:45:00 -- nvmf/run.sh@29 -- # printf %02d 6 00:07:11.269 10:45:00 -- nvmf/run.sh@29 -- # port=4406 00:07:11.269 10:45:00 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:11.269 10:45:00 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:07:11.269 10:45:00 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:11.269 10:45:00 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 -r /var/tmp/spdk6.sock 00:07:11.269 [2024-12-15 10:45:00.071836] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:11.269 [2024-12-15 10:45:00.071905] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1308677 ] 00:07:11.269 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.269 [2024-12-15 10:45:00.267429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.530 [2024-12-15 10:45:00.333134] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:11.530 [2024-12-15 10:45:00.333276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.530 [2024-12-15 10:45:00.391399] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.530 [2024-12-15 10:45:00.407707] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:07:11.530 INFO: Running with entropic power schedule (0xFF, 100). 00:07:11.530 INFO: Seed: 2387999077 00:07:11.530 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:11.531 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:11.531 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:11.531 INFO: A corpus is not provided, starting from an empty corpus 00:07:11.531 #2 INITED exec/s: 0 rss: 60Mb 00:07:11.531 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
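The "Recommended dictionary" block that closes each run lists the byte sequences, with use counts, that libFuzzer credits for new coverage; the log prints them with octal escapes. To seed a later run with them, stock libFuzzer accepts an AFL-style dictionary file via -dict=. A hedged transcription of run 5's entries into that syntax (the file name and entry names are illustrative, and the values are converted from the log's octal escapes to the \xNN form the dictionary parser expects):

    # nvmf5.dict (illustrative) - run 5's recommended entries
    create_q="\x00\x00\x00@"                      # "\000\000\000@", used twice
    cmp1="\xf3\x85\xec:\xc4\x8d\x04\x00"          # "\363\205\354:\304\215\004\000"
    cmp2="\x01\x02"                               # "\001\002"
    cmp3="\x01\x00\x7f\x9a\xfc\x0e\x8cQ"          # "\001\000\177\232\374\016\214Q"

Whether this job's llvm_nvme_fuzz wrapper forwards -dict= through to libFuzzer is not shown in the log, so treat the flag as an upstream libFuzzer option rather than a documented harness feature.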
00:07:11.531 This may also happen if the target rejected all inputs we tried so far 00:07:11.531 [2024-12-15 10:45:00.466835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e2e cdw11:00000000 00:07:11.531 [2024-12-15 10:45:00.466866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.864 NEW_FUNC[1/669]: 0x445308 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:07:11.864 NEW_FUNC[2/669]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:11.865 #4 NEW cov: 11529 ft: 11529 corp: 2/3b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 2 ChangeByte-CopyPart- 00:07:11.865 [2024-12-15 10:45:00.788304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e01 cdw11:00000000 00:07:11.865 [2024-12-15 10:45:00.788359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.865 [2024-12-15 10:45:00.788445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:11.865 [2024-12-15 10:45:00.788472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.865 [2024-12-15 10:45:00.788545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:11.865 [2024-12-15 10:45:00.788570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.865 [2024-12-15 10:45:00.788645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:11.865 [2024-12-15 10:45:00.788669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:11.865 #8 NEW cov: 11642 ft: 12473 corp: 3/12b lim: 10 exec/s: 0 rss: 68Mb L: 9/9 MS: 4 CopyPart-ChangeBit-CrossOver-CMP- DE: "\001\000\000\000\000\000\000\000"- 00:07:11.865 [2024-12-15 10:45:00.838018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:07:11.865 [2024-12-15 10:45:00.838045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.865 [2024-12-15 10:45:00.838097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:11.865 [2024-12-15 10:45:00.838111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.865 [2024-12-15 10:45:00.838164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:11.865 [2024-12-15 10:45:00.838177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.865 [2024-12-15 10:45:00.838230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 
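As a reading aid for the status lines that follow: in an entry such as "#4 NEW cov: 11529 ft: 11529 corp: 2/3b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 2 ChangeByte-CopyPart-", the leading number counts executed inputs, cov and ft are covered code edges and distinct coverage features, corp is the corpus size in units and bytes, lim is the current cap on mutated input length, and MS names the mutation sequence that produced the new unit; L appears to give the new unit's length against the largest unit seen so far. NEW_FUNC lines flag functions reached for the first time, resolved here into llvm_nvme_fuzz.c. These field meanings follow stock libFuzzer conventions and are a gloss on the output, not part of it.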
00:07:11.865 [2024-12-15 10:45:00.838244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.149 #10 NEW cov: 11648 ft: 12717 corp: 4/21b lim: 10 exec/s: 0 rss: 68Mb L: 9/9 MS: 2 EraseBytes-PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:07:12.149 [2024-12-15 10:45:00.877797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e2e cdw11:00000000 00:07:12.149 [2024-12-15 10:45:00.877823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.149 #11 NEW cov: 11733 ft: 13074 corp: 5/23b lim: 10 exec/s: 0 rss: 68Mb L: 2/9 MS: 1 CopyPart- 00:07:12.149 [2024-12-15 10:45:00.917942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e2e cdw11:00000000 00:07:12.149 [2024-12-15 10:45:00.917968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.149 #12 NEW cov: 11733 ft: 13199 corp: 6/25b lim: 10 exec/s: 0 rss: 68Mb L: 2/9 MS: 1 CopyPart- 00:07:12.149 [2024-12-15 10:45:00.958063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a2e cdw11:00000000 00:07:12.149 [2024-12-15 10:45:00.958088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.149 #13 NEW cov: 11733 ft: 13256 corp: 7/27b lim: 10 exec/s: 0 rss: 68Mb L: 2/9 MS: 1 ChangeBinInt- 00:07:12.149 [2024-12-15 10:45:00.998499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e01 cdw11:00000000 00:07:12.149 [2024-12-15 10:45:00.998524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.149 [2024-12-15 10:45:00.998575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.149 [2024-12-15 10:45:00.998589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.149 [2024-12-15 10:45:00.998640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.149 [2024-12-15 10:45:00.998653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.149 [2024-12-15 10:45:00.998704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00002000 cdw11:00000000 00:07:12.149 [2024-12-15 10:45:00.998716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.149 #14 NEW cov: 11733 ft: 13359 corp: 8/36b lim: 10 exec/s: 0 rss: 68Mb L: 9/9 MS: 1 ChangeBit- 00:07:12.149 [2024-12-15 10:45:01.038735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a01 cdw11:00000000 00:07:12.149 [2024-12-15 10:45:01.038759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.149 [2024-12-15 10:45:01.038813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:07:12.149 [2024-12-15 10:45:01.038826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.149 [2024-12-15 10:45:01.038875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.149 [2024-12-15 10:45:01.038888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.149 [2024-12-15 10:45:01.038937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.149 [2024-12-15 10:45:01.038950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.149 [2024-12-15 10:45:01.039001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000002e cdw11:00000000 00:07:12.149 [2024-12-15 10:45:01.039013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:12.149 #15 NEW cov: 11733 ft: 13445 corp: 9/46b lim: 10 exec/s: 0 rss: 68Mb L: 10/10 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:07:12.149 [2024-12-15 10:45:01.078728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:07:12.149 [2024-12-15 10:45:01.078754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.149 [2024-12-15 10:45:01.078805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.149 [2024-12-15 10:45:01.078818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.149 [2024-12-15 10:45:01.078873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.149 [2024-12-15 10:45:01.078887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.149 [2024-12-15 10:45:01.078939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000002e cdw11:00000000 00:07:12.149 [2024-12-15 10:45:01.078952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.149 #16 NEW cov: 11733 ft: 13540 corp: 10/54b lim: 10 exec/s: 0 rss: 69Mb L: 8/10 MS: 1 EraseBytes- 00:07:12.149 [2024-12-15 10:45:01.118737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:07:12.149 [2024-12-15 10:45:01.118762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.149 [2024-12-15 10:45:01.118814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.149 [2024-12-15 10:45:01.118828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.149 [2024-12-15 10:45:01.118880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.149 [2024-12-15 10:45:01.118893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.149 #17 NEW cov: 11733 ft: 13782 corp: 11/60b lim: 10 exec/s: 0 rss: 69Mb L: 6/10 MS: 1 EraseBytes- 00:07:12.417 [2024-12-15 10:45:01.158766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:07:12.417 [2024-12-15 10:45:01.158792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.417 [2024-12-15 10:45:01.158845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.417 [2024-12-15 10:45:01.158858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.417 #18 NEW cov: 11733 ft: 13989 corp: 12/65b lim: 10 exec/s: 0 rss: 69Mb L: 5/10 MS: 1 EraseBytes- 00:07:12.417 [2024-12-15 10:45:01.198720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e2a cdw11:00000000 00:07:12.417 [2024-12-15 10:45:01.198745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.417 #19 NEW cov: 11733 ft: 14003 corp: 13/67b lim: 10 exec/s: 0 rss: 69Mb L: 2/10 MS: 1 ShuffleBytes- 00:07:12.417 [2024-12-15 10:45:01.239309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff03 cdw11:00000000 00:07:12.417 [2024-12-15 10:45:01.239335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.417 [2024-12-15 10:45:01.239388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00008dc5 cdw11:00000000 00:07:12.417 [2024-12-15 10:45:01.239402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.417 [2024-12-15 10:45:01.239456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000be96 cdw11:00000000 00:07:12.417 [2024-12-15 10:45:01.239469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.417 [2024-12-15 10:45:01.239520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000c5c6 cdw11:00000000 00:07:12.417 [2024-12-15 10:45:01.239532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.417 [2024-12-15 10:45:01.239585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00002a2e cdw11:00000000 00:07:12.417 [2024-12-15 10:45:01.239598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:12.417 #20 NEW cov: 11733 ft: 14034 corp: 14/77b lim: 10 exec/s: 0 rss: 69Mb L: 10/10 MS: 1 CMP- DE: "\377\003\215\305\276\226\305\306"- 00:07:12.417 [2024-12-15 10:45:01.279293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002eff cdw11:00000000 00:07:12.417 
[2024-12-15 10:45:01.279318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.417 [2024-12-15 10:45:01.279369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:12.417 [2024-12-15 10:45:01.279382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.417 [2024-12-15 10:45:01.279434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:12.417 [2024-12-15 10:45:01.279447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.417 [2024-12-15 10:45:01.279497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff2e cdw11:00000000 00:07:12.417 [2024-12-15 10:45:01.279509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.417 #21 NEW cov: 11733 ft: 14045 corp: 15/85b lim: 10 exec/s: 0 rss: 69Mb L: 8/10 MS: 1 InsertRepeatedBytes- 00:07:12.417 [2024-12-15 10:45:01.319179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e20 cdw11:00000000 00:07:12.417 [2024-12-15 10:45:01.319203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.417 [2024-12-15 10:45:01.319254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000002a cdw11:00000000 00:07:12.417 [2024-12-15 10:45:01.319267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.417 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:12.417 #22 NEW cov: 11756 ft: 14134 corp: 16/89b lim: 10 exec/s: 0 rss: 69Mb L: 4/10 MS: 1 CMP- DE: " \000"- 00:07:12.417 [2024-12-15 10:45:01.359538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff03 cdw11:00000000 00:07:12.417 [2024-12-15 10:45:01.359563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.417 [2024-12-15 10:45:01.359616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00008dc5 cdw11:00000000 00:07:12.417 [2024-12-15 10:45:01.359629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.417 [2024-12-15 10:45:01.359679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000be96 cdw11:00000000 00:07:12.417 [2024-12-15 10:45:01.359692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.417 [2024-12-15 10:45:01.359741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000c5c6 cdw11:00000000 00:07:12.417 [2024-12-15 10:45:01.359754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.417 #24 NEW cov: 11756 ft: 14210 
corp: 17/98b lim: 10 exec/s: 0 rss: 69Mb L: 9/10 MS: 2 EraseBytes-PersAutoDict- DE: "\377\003\215\305\276\226\305\306"- 00:07:12.417 [2024-12-15 10:45:01.399674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e01 cdw11:00000000 00:07:12.417 [2024-12-15 10:45:01.399701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.417 [2024-12-15 10:45:01.399769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.417 [2024-12-15 10:45:01.399782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.417 [2024-12-15 10:45:01.399833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.417 [2024-12-15 10:45:01.399846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.417 [2024-12-15 10:45:01.399897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.417 [2024-12-15 10:45:01.399909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.417 #25 NEW cov: 11756 ft: 14246 corp: 18/107b lim: 10 exec/s: 0 rss: 69Mb L: 9/10 MS: 1 ChangeBit- 00:07:12.675 [2024-12-15 10:45:01.439639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:07:12.675 [2024-12-15 10:45:01.439664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.675 [2024-12-15 10:45:01.439717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.675 [2024-12-15 10:45:01.439731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.675 [2024-12-15 10:45:01.439781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.675 [2024-12-15 10:45:01.439794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.675 #26 NEW cov: 11756 ft: 14266 corp: 19/113b lim: 10 exec/s: 26 rss: 69Mb L: 6/10 MS: 1 ShuffleBytes- 00:07:12.675 [2024-12-15 10:45:01.479903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a2e cdw11:00000000 00:07:12.675 [2024-12-15 10:45:01.479927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.675 [2024-12-15 10:45:01.479981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002e01 cdw11:00000000 00:07:12.675 [2024-12-15 10:45:01.479994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.675 [2024-12-15 10:45:01.480045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.675 [2024-12-15 10:45:01.480058] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.675 [2024-12-15 10:45:01.480109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.675 [2024-12-15 10:45:01.480123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.675 #27 NEW cov: 11756 ft: 14272 corp: 20/122b lim: 10 exec/s: 27 rss: 69Mb L: 9/10 MS: 1 CrossOver- 00:07:12.675 [2024-12-15 10:45:01.519849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:07:12.675 [2024-12-15 10:45:01.519873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.675 [2024-12-15 10:45:01.519925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000f900 cdw11:00000000 00:07:12.676 [2024-12-15 10:45:01.519939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.676 [2024-12-15 10:45:01.519992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.676 [2024-12-15 10:45:01.520005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.676 #28 NEW cov: 11756 ft: 14287 corp: 21/128b lim: 10 exec/s: 28 rss: 69Mb L: 6/10 MS: 1 ChangeBinInt- 00:07:12.676 [2024-12-15 10:45:01.559758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00006a0a cdw11:00000000 00:07:12.676 [2024-12-15 10:45:01.559783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.676 #30 NEW cov: 11756 ft: 14309 corp: 22/130b lim: 10 exec/s: 30 rss: 69Mb L: 2/10 MS: 2 CrossOver-InsertByte- 00:07:12.676 [2024-12-15 10:45:01.599882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e40 cdw11:00000000 00:07:12.676 [2024-12-15 10:45:01.599907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.676 #38 NEW cov: 11756 ft: 14321 corp: 23/132b lim: 10 exec/s: 38 rss: 69Mb L: 2/10 MS: 3 EraseBytes-ShuffleBytes-InsertByte- 00:07:12.676 [2024-12-15 10:45:01.630213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002edb cdw11:00000000 00:07:12.676 [2024-12-15 10:45:01.630237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.676 [2024-12-15 10:45:01.630288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000dbdb cdw11:00000000 00:07:12.676 [2024-12-15 10:45:01.630301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.676 [2024-12-15 10:45:01.630353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000dbdb cdw11:00000000 00:07:12.676 [2024-12-15 10:45:01.630366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.676 #40 NEW cov: 11756 ft: 14329 corp: 24/138b lim: 10 exec/s: 40 rss: 69Mb L: 6/10 MS: 2 EraseBytes-InsertRepeatedBytes- 00:07:12.676 [2024-12-15 10:45:01.670592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a01 cdw11:00000000 00:07:12.676 [2024-12-15 10:45:01.670616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.676 [2024-12-15 10:45:01.670668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.676 [2024-12-15 10:45:01.670682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.676 [2024-12-15 10:45:01.670734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.676 [2024-12-15 10:45:01.670746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.676 [2024-12-15 10:45:01.670797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.676 [2024-12-15 10:45:01.670809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.676 [2024-12-15 10:45:01.670860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000002e cdw11:00000000 00:07:12.676 [2024-12-15 10:45:01.670873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:12.935 #41 NEW cov: 11756 ft: 14361 corp: 25/148b lim: 10 exec/s: 41 rss: 69Mb L: 10/10 MS: 1 ShuffleBytes- 00:07:12.935 [2024-12-15 10:45:01.710697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a01 cdw11:00000000 00:07:12.935 [2024-12-15 10:45:01.710724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.935 [2024-12-15 10:45:01.710775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.935 [2024-12-15 10:45:01.710789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.935 [2024-12-15 10:45:01.710840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.935 [2024-12-15 10:45:01.710854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.935 [2024-12-15 10:45:01.710902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.935 [2024-12-15 10:45:01.710915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.935 [2024-12-15 10:45:01.710966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000002e cdw11:00000000 00:07:12.935 [2024-12-15 10:45:01.710979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:12.935 #42 NEW cov: 11756 ft: 14367 corp: 26/158b lim: 10 exec/s: 42 rss: 69Mb L: 10/10 MS: 1 ShuffleBytes- 00:07:12.935 [2024-12-15 10:45:01.750817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000001fc cdw11:00000000 00:07:12.935 [2024-12-15 10:45:01.750842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.935 [2024-12-15 10:45:01.750886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000723a cdw11:00000000 00:07:12.935 [2024-12-15 10:45:01.750900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.935 [2024-12-15 10:45:01.750950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004169 cdw11:00000000 00:07:12.935 [2024-12-15 10:45:01.750963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.935 [2024-12-15 10:45:01.751013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00003a39 cdw11:00000000 00:07:12.935 [2024-12-15 10:45:01.751025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.935 [2024-12-15 10:45:01.751076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00002a2e cdw11:00000000 00:07:12.935 [2024-12-15 10:45:01.751089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:12.935 #43 NEW cov: 11756 ft: 14381 corp: 27/168b lim: 10 exec/s: 43 rss: 70Mb L: 10/10 MS: 1 ChangeBinInt- 00:07:12.935 [2024-12-15 10:45:01.790456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e2e cdw11:00000000 00:07:12.935 [2024-12-15 10:45:01.790479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.935 #44 NEW cov: 11756 ft: 14409 corp: 28/171b lim: 10 exec/s: 44 rss: 70Mb L: 3/10 MS: 1 CrossOver- 00:07:12.935 [2024-12-15 10:45:01.820648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:07:12.935 [2024-12-15 10:45:01.820673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.935 [2024-12-15 10:45:01.820726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.935 [2024-12-15 10:45:01.820742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.935 #45 NEW cov: 11756 ft: 14413 corp: 29/176b lim: 10 exec/s: 45 rss: 70Mb L: 5/10 MS: 1 ShuffleBytes- 00:07:12.935 [2024-12-15 10:45:01.861036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:07:12.935 [2024-12-15 10:45:01.861061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.935 [2024-12-15 
10:45:01.861113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.935 [2024-12-15 10:45:01.861127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.935 [2024-12-15 10:45:01.861177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.935 [2024-12-15 10:45:01.861205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.935 [2024-12-15 10:45:01.861257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 00:07:12.935 [2024-12-15 10:45:01.861269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.935 #46 NEW cov: 11756 ft: 14427 corp: 30/185b lim: 10 exec/s: 46 rss: 70Mb L: 9/10 MS: 1 ChangeBinInt- 00:07:12.935 [2024-12-15 10:45:01.900870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000dbdb cdw11:00000000 00:07:12.935 [2024-12-15 10:45:01.900895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.935 [2024-12-15 10:45:01.900962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000dbdb cdw11:00000000 00:07:12.935 [2024-12-15 10:45:01.900976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.935 #47 NEW cov: 11756 ft: 14432 corp: 31/189b lim: 10 exec/s: 47 rss: 70Mb L: 4/10 MS: 1 EraseBytes- 00:07:12.935 [2024-12-15 10:45:01.941028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000dbdb cdw11:00000000 00:07:12.935 [2024-12-15 10:45:01.941052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.935 [2024-12-15 10:45:01.941105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000dbdb cdw11:00000000 00:07:12.935 [2024-12-15 10:45:01.941118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.194 #48 NEW cov: 11756 ft: 14439 corp: 32/193b lim: 10 exec/s: 48 rss: 70Mb L: 4/10 MS: 1 CopyPart- 00:07:13.194 [2024-12-15 10:45:01.981389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:13.194 [2024-12-15 10:45:01.981413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.194 [2024-12-15 10:45:01.981478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:13.194 [2024-12-15 10:45:01.981492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.194 [2024-12-15 10:45:01.981542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:13.194 [2024-12-15 10:45:01.981555] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.194 [2024-12-15 10:45:01.981606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff2e cdw11:00000000 00:07:13.194 [2024-12-15 10:45:01.981621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:13.194 #49 NEW cov: 11756 ft: 14542 corp: 33/202b lim: 10 exec/s: 49 rss: 70Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:07:13.194 [2024-12-15 10:45:02.021204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e2c cdw11:00000000 00:07:13.194 [2024-12-15 10:45:02.021228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.194 [2024-12-15 10:45:02.021279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000002a cdw11:00000000 00:07:13.194 [2024-12-15 10:45:02.021292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.194 #50 NEW cov: 11756 ft: 14546 corp: 34/206b lim: 10 exec/s: 50 rss: 70Mb L: 4/10 MS: 1 ChangeByte- 00:07:13.194 [2024-12-15 10:45:02.061229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e2e cdw11:00000000 00:07:13.194 [2024-12-15 10:45:02.061254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.194 #52 NEW cov: 11756 ft: 14561 corp: 35/208b lim: 10 exec/s: 52 rss: 70Mb L: 2/10 MS: 2 EraseBytes-CopyPart- 00:07:13.194 [2024-12-15 10:45:02.101343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a2e cdw11:00000000 00:07:13.194 [2024-12-15 10:45:02.101367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.194 #53 NEW cov: 11756 ft: 14563 corp: 36/210b lim: 10 exec/s: 53 rss: 70Mb L: 2/10 MS: 1 ShuffleBytes- 00:07:13.194 [2024-12-15 10:45:02.131442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e60 cdw11:00000000 00:07:13.194 [2024-12-15 10:45:02.131466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.194 #54 NEW cov: 11756 ft: 14628 corp: 37/212b lim: 10 exec/s: 54 rss: 70Mb L: 2/10 MS: 1 ChangeBit- 00:07:13.194 [2024-12-15 10:45:02.171561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e2e cdw11:00000000 00:07:13.194 [2024-12-15 10:45:02.171584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.452 [2024-12-15 10:45:02.211936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e2e cdw11:00000000 00:07:13.452 [2024-12-15 10:45:02.211961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.452 [2024-12-15 10:45:02.212013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:13.452 [2024-12-15 
10:45:02.212025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.453 [2024-12-15 10:45:02.212076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000002a cdw11:00000000 00:07:13.453 [2024-12-15 10:45:02.212089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.453 #56 NEW cov: 11756 ft: 14638 corp: 38/218b lim: 10 exec/s: 56 rss: 70Mb L: 6/10 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:13.453 [2024-12-15 10:45:02.251992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e2e cdw11:00000000 00:07:13.453 [2024-12-15 10:45:02.252016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.453 [2024-12-15 10:45:02.252067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:13.453 [2024-12-15 10:45:02.252080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.453 [2024-12-15 10:45:02.252138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000002a cdw11:00000000 00:07:13.453 [2024-12-15 10:45:02.252151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.453 #57 NEW cov: 11756 ft: 14644 corp: 39/225b lim: 10 exec/s: 57 rss: 70Mb L: 7/10 MS: 1 InsertByte- 00:07:13.453 [2024-12-15 10:45:02.292265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002eff cdw11:00000000 00:07:13.453 [2024-12-15 10:45:02.292289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.453 [2024-12-15 10:45:02.292342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:13.453 [2024-12-15 10:45:02.292355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.453 [2024-12-15 10:45:02.292406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000fff3 cdw11:00000000 00:07:13.453 [2024-12-15 10:45:02.292424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.453 [2024-12-15 10:45:02.292477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:13.453 [2024-12-15 10:45:02.292490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:13.453 #58 NEW cov: 11756 ft: 14659 corp: 40/234b lim: 10 exec/s: 58 rss: 70Mb L: 9/10 MS: 1 InsertByte- 00:07:13.453 [2024-12-15 10:45:02.332409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e20 cdw11:00000000 00:07:13.453 [2024-12-15 10:45:02.332440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.453 [2024-12-15 10:45:02.332492] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000f5f5 cdw11:00000000 00:07:13.453 [2024-12-15 10:45:02.332506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.453 [2024-12-15 10:45:02.332556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000f5f5 cdw11:00000000 00:07:13.453 [2024-12-15 10:45:02.332569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.453 [2024-12-15 10:45:02.332621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000002a cdw11:00000000 00:07:13.453 [2024-12-15 10:45:02.332633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:13.453 #59 NEW cov: 11756 ft: 14711 corp: 41/242b lim: 10 exec/s: 59 rss: 70Mb L: 8/10 MS: 1 InsertRepeatedBytes- 00:07:13.453 [2024-12-15 10:45:02.372484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff03 cdw11:00000000 00:07:13.453 [2024-12-15 10:45:02.372509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.453 [2024-12-15 10:45:02.372579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00008dc5 cdw11:00000000 00:07:13.453 [2024-12-15 10:45:02.372593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.453 [2024-12-15 10:45:02.372644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000be96 cdw11:00000000 00:07:13.453 [2024-12-15 10:45:02.372657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.453 [2024-12-15 10:45:02.372712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000c5c6 cdw11:00000000 00:07:13.453 [2024-12-15 10:45:02.372725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:13.453 #60 NEW cov: 11756 ft: 14721 corp: 42/250b lim: 10 exec/s: 60 rss: 70Mb L: 8/10 MS: 1 PersAutoDict- DE: "\377\003\215\305\276\226\305\306"- 00:07:13.453 [2024-12-15 10:45:02.412250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e2e cdw11:00000000 00:07:13.453 [2024-12-15 10:45:02.412274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.453 #61 NEW cov: 11756 ft: 14782 corp: 43/252b lim: 10 exec/s: 61 rss: 70Mb L: 2/10 MS: 1 CopyPart- 00:07:13.453 [2024-12-15 10:45:02.452601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000002e cdw11:00000000 00:07:13.453 [2024-12-15 10:45:02.452625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.453 [2024-12-15 10:45:02.452676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002e00 cdw11:00000000 00:07:13.453 [2024-12-15 10:45:02.452689] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.453 [2024-12-15 10:45:02.452739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000002a cdw11:00000000 00:07:13.453 [2024-12-15 10:45:02.452752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.713 #62 NEW cov: 11756 ft: 14796 corp: 44/258b lim: 10 exec/s: 31 rss: 70Mb L: 6/10 MS: 1 ShuffleBytes- 00:07:13.713 #62 DONE cov: 11756 ft: 14796 corp: 44/258b lim: 10 exec/s: 31 rss: 70Mb 00:07:13.713 ###### Recommended dictionary. ###### 00:07:13.713 "\001\000\000\000\000\000\000\000" # Uses: 2 00:07:13.713 "\377\003\215\305\276\226\305\306" # Uses: 2 00:07:13.713 " \000" # Uses: 0 00:07:13.713 ###### End of recommended dictionary. ###### 00:07:13.713 Done 62 runs in 2 second(s) 00:07:13.713 10:45:02 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_6.conf 00:07:13.713 10:45:02 -- ../common.sh@72 -- # (( i++ )) 00:07:13.713 10:45:02 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:13.713 10:45:02 -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:07:13.713 10:45:02 -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:07:13.713 10:45:02 -- nvmf/run.sh@24 -- # local timen=1 00:07:13.713 10:45:02 -- nvmf/run.sh@25 -- # local core=0x1 00:07:13.713 10:45:02 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:13.713 10:45:02 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:07:13.713 10:45:02 -- nvmf/run.sh@29 -- # printf %02d 7 00:07:13.713 10:45:02 -- nvmf/run.sh@29 -- # port=4407 00:07:13.713 10:45:02 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:13.713 10:45:02 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:07:13.713 10:45:02 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:13.713 10:45:02 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 -r /var/tmp/spdk7.sock 00:07:13.713 [2024-12-15 10:45:02.636545] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:13.713 [2024-12-15 10:45:02.636623] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1309216 ] 00:07:13.713 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.972 [2024-12-15 10:45:02.891230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.972 [2024-12-15 10:45:02.976434] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:13.972 [2024-12-15 10:45:02.976576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.231 [2024-12-15 10:45:03.034524] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.231 [2024-12-15 10:45:03.050855] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:07:14.231 INFO: Running with entropic power schedule (0xFF, 100). 00:07:14.231 INFO: Seed: 737016207 00:07:14.231 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:14.231 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:14.231 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:14.231 INFO: A corpus is not provided, starting from an empty corpus 00:07:14.231 #2 INITED exec/s: 0 rss: 60Mb 00:07:14.231 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:14.231 This may also happen if the target rejected all inputs we tried so far 00:07:14.231 [2024-12-15 10:45:03.117664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a3a cdw11:00000000 00:07:14.231 [2024-12-15 10:45:03.117701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.490 NEW_FUNC[1/669]: 0x445d08 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:07:14.490 NEW_FUNC[2/669]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:14.491 #3 NEW cov: 11526 ft: 11530 corp: 2/3b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 InsertByte- 00:07:14.491 [2024-12-15 10:45:03.448263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:14.491 [2024-12-15 10:45:03.448302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.491 [2024-12-15 10:45:03.448421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:14.491 [2024-12-15 10:45:03.448439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.491 [2024-12-15 10:45:03.448560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:14.491 [2024-12-15 10:45:03.448576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.491 [2024-12-15 10:45:03.448681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 
cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:14.491 [2024-12-15 10:45:03.448698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.491 #4 NEW cov: 11642 ft: 12542 corp: 3/11b lim: 10 exec/s: 0 rss: 68Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:07:14.491 [2024-12-15 10:45:03.488309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:14.491 [2024-12-15 10:45:03.488338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.491 [2024-12-15 10:45:03.488456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:14.491 [2024-12-15 10:45:03.488474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.491 [2024-12-15 10:45:03.488587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:14.491 [2024-12-15 10:45:03.488604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.491 [2024-12-15 10:45:03.488720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:14.491 [2024-12-15 10:45:03.488737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.750 #5 NEW cov: 11648 ft: 12784 corp: 4/20b lim: 10 exec/s: 0 rss: 69Mb L: 9/9 MS: 1 CrossOver- 00:07:14.750 [2024-12-15 10:45:03.538181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aad cdw11:00000000 00:07:14.750 [2024-12-15 10:45:03.538208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.750 [2024-12-15 10:45:03.538331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000adad cdw11:00000000 00:07:14.750 [2024-12-15 10:45:03.538349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.750 [2024-12-15 10:45:03.538477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000adad cdw11:00000000 00:07:14.750 [2024-12-15 10:45:03.538494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.750 #6 NEW cov: 11733 ft: 13129 corp: 5/27b lim: 10 exec/s: 0 rss: 69Mb L: 7/9 MS: 1 InsertRepeatedBytes- 00:07:14.750 [2024-12-15 10:45:03.578207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aad cdw11:00000000 00:07:14.750 [2024-12-15 10:45:03.578235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.750 [2024-12-15 10:45:03.578350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000adad cdw11:00000000 00:07:14.750 [2024-12-15 10:45:03.578366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.750 #7 
NEW cov: 11733 ft: 13363 corp: 6/32b lim: 10 exec/s: 0 rss: 69Mb L: 5/9 MS: 1 EraseBytes- 00:07:14.750 [2024-12-15 10:45:03.618634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:14.750 [2024-12-15 10:45:03.618660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.750 [2024-12-15 10:45:03.618778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:14.750 [2024-12-15 10:45:03.618795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.750 [2024-12-15 10:45:03.618913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:14.750 [2024-12-15 10:45:03.618930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.750 [2024-12-15 10:45:03.619038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:14.750 [2024-12-15 10:45:03.619055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.750 #8 NEW cov: 11733 ft: 13471 corp: 7/41b lim: 10 exec/s: 0 rss: 69Mb L: 9/9 MS: 1 ChangeByte- 00:07:14.750 [2024-12-15 10:45:03.669070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aad cdw11:00000000 00:07:14.750 [2024-12-15 10:45:03.669097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.750 [2024-12-15 10:45:03.669219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000adad cdw11:00000000 00:07:14.750 [2024-12-15 10:45:03.669235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.750 [2024-12-15 10:45:03.669346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000adad cdw11:00000000 00:07:14.750 [2024-12-15 10:45:03.669363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.750 [2024-12-15 10:45:03.669476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000adad cdw11:00000000 00:07:14.750 [2024-12-15 10:45:03.669492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.750 [2024-12-15 10:45:03.669598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000adad cdw11:00000000 00:07:14.750 [2024-12-15 10:45:03.669614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:14.750 #9 NEW cov: 11733 ft: 13570 corp: 8/51b lim: 10 exec/s: 0 rss: 69Mb L: 10/10 MS: 1 CopyPart- 00:07:14.750 [2024-12-15 10:45:03.708520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aad cdw11:00000000 00:07:14.750 [2024-12-15 10:45:03.708548] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.750 [2024-12-15 10:45:03.708652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000adad cdw11:00000000 00:07:14.750 [2024-12-15 10:45:03.708667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.750 #10 NEW cov: 11733 ft: 13591 corp: 9/56b lim: 10 exec/s: 0 rss: 69Mb L: 5/10 MS: 1 ShuffleBytes- 00:07:14.750 [2024-12-15 10:45:03.749059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:14.750 [2024-12-15 10:45:03.749085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.750 [2024-12-15 10:45:03.749204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:14.750 [2024-12-15 10:45:03.749220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.750 [2024-12-15 10:45:03.749322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:14.750 [2024-12-15 10:45:03.749338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.750 [2024-12-15 10:45:03.749465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:14.750 [2024-12-15 10:45:03.749484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.010 #11 NEW cov: 11733 ft: 13680 corp: 10/65b lim: 10 exec/s: 0 rss: 69Mb L: 9/10 MS: 1 ShuffleBytes- 00:07:15.010 [2024-12-15 10:45:03.789250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:07:15.010 [2024-12-15 10:45:03.789276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.010 [2024-12-15 10:45:03.789404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000400 cdw11:00000000 00:07:15.010 [2024-12-15 10:45:03.789425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.010 [2024-12-15 10:45:03.789540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.010 [2024-12-15 10:45:03.789558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.010 [2024-12-15 10:45:03.789681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.010 [2024-12-15 10:45:03.789701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.010 #12 NEW cov: 11733 ft: 13769 corp: 11/74b lim: 10 exec/s: 0 rss: 69Mb L: 9/10 MS: 1 CMP- DE: "\001\004\000\000\000\000\000\000"- 00:07:15.010 [2024-12-15 10:45:03.829182] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aac cdw11:00000000 00:07:15.010 [2024-12-15 10:45:03.829211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.010 [2024-12-15 10:45:03.829323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000adad cdw11:00000000 00:07:15.010 [2024-12-15 10:45:03.829340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.010 [2024-12-15 10:45:03.829469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000adad cdw11:00000000 00:07:15.010 [2024-12-15 10:45:03.829487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.010 #13 NEW cov: 11733 ft: 13827 corp: 12/81b lim: 10 exec/s: 0 rss: 69Mb L: 7/10 MS: 1 ChangeBit- 00:07:15.010 [2024-12-15 10:45:03.869581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:15.010 [2024-12-15 10:45:03.869609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.010 [2024-12-15 10:45:03.869728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.010 [2024-12-15 10:45:03.869747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.010 [2024-12-15 10:45:03.869856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.010 [2024-12-15 10:45:03.869874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.010 [2024-12-15 10:45:03.869990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.010 [2024-12-15 10:45:03.870007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.010 #14 NEW cov: 11733 ft: 13845 corp: 13/90b lim: 10 exec/s: 0 rss: 69Mb L: 9/10 MS: 1 ShuffleBytes- 00:07:15.010 [2024-12-15 10:45:03.919951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:15.010 [2024-12-15 10:45:03.919979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.010 [2024-12-15 10:45:03.920091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.010 [2024-12-15 10:45:03.920108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.010 [2024-12-15 10:45:03.920217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.010 [2024-12-15 10:45:03.920236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.010 [2024-12-15 10:45:03.920348] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.010 [2024-12-15 10:45:03.920364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.010 [2024-12-15 10:45:03.920480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:07:15.010 [2024-12-15 10:45:03.920500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.010 #15 NEW cov: 11733 ft: 13924 corp: 14/100b lim: 10 exec/s: 0 rss: 69Mb L: 10/10 MS: 1 CopyPart- 00:07:15.010 [2024-12-15 10:45:03.959884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:15.010 [2024-12-15 10:45:03.959911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.010 [2024-12-15 10:45:03.960028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.010 [2024-12-15 10:45:03.960043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.010 [2024-12-15 10:45:03.960153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.010 [2024-12-15 10:45:03.960170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.010 [2024-12-15 10:45:03.960298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000040 cdw11:00000000 00:07:15.010 [2024-12-15 10:45:03.960315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.010 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:15.010 #16 NEW cov: 11756 ft: 13943 corp: 15/109b lim: 10 exec/s: 0 rss: 69Mb L: 9/10 MS: 1 ChangeBit- 00:07:15.010 [2024-12-15 10:45:04.009765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:15.010 [2024-12-15 10:45:04.009793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.010 [2024-12-15 10:45:04.009906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.010 [2024-12-15 10:45:04.009924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.010 [2024-12-15 10:45:04.010045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.010 [2024-12-15 10:45:04.010063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.010 [2024-12-15 10:45:04.010182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.010 [2024-12-15 10:45:04.010200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.269 #17 NEW cov: 11756 ft: 13972 corp: 16/118b lim: 10 exec/s: 0 rss: 69Mb L: 9/10 MS: 1 ChangeByte- 00:07:15.269 [2024-12-15 10:45:04.049492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aad cdw11:00000000 00:07:15.269 [2024-12-15 10:45:04.049518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.270 #18 NEW cov: 11756 ft: 14036 corp: 17/121b lim: 10 exec/s: 0 rss: 69Mb L: 3/10 MS: 1 EraseBytes- 00:07:15.270 [2024-12-15 10:45:04.089646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000adad cdw11:00000000 00:07:15.270 [2024-12-15 10:45:04.089675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.270 [2024-12-15 10:45:04.089787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ad0a cdw11:00000000 00:07:15.270 [2024-12-15 10:45:04.089805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.270 #19 NEW cov: 11756 ft: 14051 corp: 18/126b lim: 10 exec/s: 19 rss: 69Mb L: 5/10 MS: 1 ShuffleBytes- 00:07:15.270 [2024-12-15 10:45:04.130518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:15.270 [2024-12-15 10:45:04.130547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.270 [2024-12-15 10:45:04.130659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.270 [2024-12-15 10:45:04.130675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.270 [2024-12-15 10:45:04.130788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.270 [2024-12-15 10:45:04.130806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.270 [2024-12-15 10:45:04.130922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.270 [2024-12-15 10:45:04.130938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.270 [2024-12-15 10:45:04.131037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:07:15.270 [2024-12-15 10:45:04.131056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.270 #20 NEW cov: 11756 ft: 14075 corp: 19/136b lim: 10 exec/s: 20 rss: 70Mb L: 10/10 MS: 1 CopyPart- 00:07:15.270 [2024-12-15 10:45:04.179609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000b5d cdw11:00000000 00:07:15.270 [2024-12-15 10:45:04.179637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.270 #22 NEW cov: 11756 ft: 14090 corp: 
20/138b lim: 10 exec/s: 22 rss: 70Mb L: 2/10 MS: 2 ChangeBit-InsertByte- 00:07:15.270 [2024-12-15 10:45:04.220715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aad cdw11:00000000 00:07:15.270 [2024-12-15 10:45:04.220744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.270 [2024-12-15 10:45:04.220861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000adad cdw11:00000000 00:07:15.270 [2024-12-15 10:45:04.220878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.270 [2024-12-15 10:45:04.220992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ad0a cdw11:00000000 00:07:15.270 [2024-12-15 10:45:04.221010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.270 [2024-12-15 10:45:04.221127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000adad cdw11:00000000 00:07:15.270 [2024-12-15 10:45:04.221145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.270 [2024-12-15 10:45:04.221258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000adad cdw11:00000000 00:07:15.270 [2024-12-15 10:45:04.221277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.270 #23 NEW cov: 11756 ft: 14130 corp: 21/148b lim: 10 exec/s: 23 rss: 70Mb L: 10/10 MS: 1 CopyPart- 00:07:15.270 [2024-12-15 10:45:04.280528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ad0a cdw11:00000000 00:07:15.270 [2024-12-15 10:45:04.280569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.270 [2024-12-15 10:45:04.280707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000adad cdw11:00000000 00:07:15.270 [2024-12-15 10:45:04.280726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.270 [2024-12-15 10:45:04.280845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000adad cdw11:00000000 00:07:15.270 [2024-12-15 10:45:04.280862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.529 #24 NEW cov: 11756 ft: 14230 corp: 22/154b lim: 10 exec/s: 24 rss: 70Mb L: 6/10 MS: 1 CopyPart- 00:07:15.529 [2024-12-15 10:45:04.320233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a3a cdw11:00000000 00:07:15.529 [2024-12-15 10:45:04.320260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.529 #25 NEW cov: 11756 ft: 14274 corp: 23/157b lim: 10 exec/s: 25 rss: 70Mb L: 3/10 MS: 1 InsertByte- 00:07:15.529 [2024-12-15 10:45:04.371002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a01 
cdw11:00000000 00:07:15.529 [2024-12-15 10:45:04.371030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.529 [2024-12-15 10:45:04.371140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000400 cdw11:00000000 00:07:15.529 [2024-12-15 10:45:04.371157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.529 [2024-12-15 10:45:04.371273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.529 [2024-12-15 10:45:04.371291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.529 [2024-12-15 10:45:04.371399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.529 [2024-12-15 10:45:04.371419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.529 #26 NEW cov: 11756 ft: 14291 corp: 24/166b lim: 10 exec/s: 26 rss: 70Mb L: 9/10 MS: 1 PersAutoDict- DE: "\001\004\000\000\000\000\000\000"- 00:07:15.529 [2024-12-15 10:45:04.411132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:15.529 [2024-12-15 10:45:04.411158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.529 [2024-12-15 10:45:04.411267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.529 [2024-12-15 10:45:04.411283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.529 [2024-12-15 10:45:04.411398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.530 [2024-12-15 10:45:04.411417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.530 [2024-12-15 10:45:04.411547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.530 [2024-12-15 10:45:04.411563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.530 #27 NEW cov: 11756 ft: 14311 corp: 25/175b lim: 10 exec/s: 27 rss: 70Mb L: 9/10 MS: 1 ShuffleBytes- 00:07:15.530 [2024-12-15 10:45:04.451435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:15.530 [2024-12-15 10:45:04.451478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.530 [2024-12-15 10:45:04.451597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:15.530 [2024-12-15 10:45:04.451614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.530 [2024-12-15 10:45:04.451725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO 
SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffad cdw11:00000000 00:07:15.530 [2024-12-15 10:45:04.451744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.530 [2024-12-15 10:45:04.451858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000adad cdw11:00000000 00:07:15.530 [2024-12-15 10:45:04.451873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.530 [2024-12-15 10:45:04.451981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000aad cdw11:00000000 00:07:15.530 [2024-12-15 10:45:04.452000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.530 #28 NEW cov: 11756 ft: 14323 corp: 26/185b lim: 10 exec/s: 28 rss: 70Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:15.530 [2024-12-15 10:45:04.501693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:15.530 [2024-12-15 10:45:04.501721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.530 [2024-12-15 10:45:04.501837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.530 [2024-12-15 10:45:04.501873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.530 [2024-12-15 10:45:04.501986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000020 cdw11:00000000 00:07:15.530 [2024-12-15 10:45:04.502003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.530 [2024-12-15 10:45:04.502112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.530 [2024-12-15 10:45:04.502130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.530 [2024-12-15 10:45:04.502194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:07:15.530 [2024-12-15 10:45:04.502211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.530 #29 NEW cov: 11756 ft: 14342 corp: 27/195b lim: 10 exec/s: 29 rss: 70Mb L: 10/10 MS: 1 ChangeBit- 00:07:15.789 [2024-12-15 10:45:04.551122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.789 [2024-12-15 10:45:04.551150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.789 [2024-12-15 10:45:04.551258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.789 [2024-12-15 10:45:04.551275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.789 #32 NEW cov: 11756 ft: 14346 corp: 28/200b lim: 10 exec/s: 32 rss: 70Mb L: 5/10 MS: 3 
ChangeBit-ChangeBit-CMP- DE: "\000\000\000\000"- 00:07:15.789 [2024-12-15 10:45:04.591765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:15.789 [2024-12-15 10:45:04.591794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.789 [2024-12-15 10:45:04.591910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 00:07:15.789 [2024-12-15 10:45:04.591928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.789 [2024-12-15 10:45:04.592036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.789 [2024-12-15 10:45:04.592053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.789 [2024-12-15 10:45:04.592165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.789 [2024-12-15 10:45:04.592181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.789 #33 NEW cov: 11756 ft: 14347 corp: 29/209b lim: 10 exec/s: 33 rss: 70Mb L: 9/10 MS: 1 ChangeBit- 00:07:15.789 [2024-12-15 10:45:04.631228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b06d cdw11:00000000 00:07:15.789 [2024-12-15 10:45:04.631258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.789 #36 NEW cov: 11756 ft: 14357 corp: 30/211b lim: 10 exec/s: 36 rss: 70Mb L: 2/10 MS: 3 ShuffleBytes-ChangeByte-InsertByte- 00:07:15.789 [2024-12-15 10:45:04.671790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:15.789 [2024-12-15 10:45:04.671817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.789 [2024-12-15 10:45:04.671939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.789 [2024-12-15 10:45:04.671957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.789 [2024-12-15 10:45:04.672064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.789 [2024-12-15 10:45:04.672079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.789 #37 NEW cov: 11756 ft: 14396 corp: 31/218b lim: 10 exec/s: 37 rss: 70Mb L: 7/10 MS: 1 EraseBytes- 00:07:15.789 [2024-12-15 10:45:04.711850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aac cdw11:00000000 00:07:15.789 [2024-12-15 10:45:04.711878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.790 [2024-12-15 10:45:04.711991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 
cdw10:0000adad cdw11:00000000 00:07:15.790 [2024-12-15 10:45:04.712010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.790 [2024-12-15 10:45:04.712122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000adad cdw11:00000000 00:07:15.790 [2024-12-15 10:45:04.712139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.790 #38 NEW cov: 11756 ft: 14401 corp: 32/225b lim: 10 exec/s: 38 rss: 70Mb L: 7/10 MS: 1 CrossOver- 00:07:15.790 [2024-12-15 10:45:04.761652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000b5d cdw11:00000000 00:07:15.790 [2024-12-15 10:45:04.761679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.790 #39 NEW cov: 11756 ft: 14420 corp: 33/227b lim: 10 exec/s: 39 rss: 70Mb L: 2/10 MS: 1 ShuffleBytes- 00:07:15.790 [2024-12-15 10:45:04.802640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:15.790 [2024-12-15 10:45:04.802669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.790 [2024-12-15 10:45:04.802784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00007a00 cdw11:00000000 00:07:15.790 [2024-12-15 10:45:04.802801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.049 [2024-12-15 10:45:04.802908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:16.049 [2024-12-15 10:45:04.802929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.049 [2024-12-15 10:45:04.803043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:16.049 [2024-12-15 10:45:04.803062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.049 [2024-12-15 10:45:04.803171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:07:16.049 [2024-12-15 10:45:04.803190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:16.049 #40 NEW cov: 11756 ft: 14494 corp: 34/237b lim: 10 exec/s: 40 rss: 70Mb L: 10/10 MS: 1 ChangeByte- 00:07:16.049 [2024-12-15 10:45:04.852084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aad cdw11:00000000 00:07:16.049 [2024-12-15 10:45:04.852111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.049 [2024-12-15 10:45:04.852226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000adad cdw11:00000000 00:07:16.049 [2024-12-15 10:45:04.852243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.049 #41 NEW cov: 
11756 ft: 14500 corp: 35/241b lim: 10 exec/s: 41 rss: 70Mb L: 4/10 MS: 1 EraseBytes- 00:07:16.049 [2024-12-15 10:45:04.892761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:16.049 [2024-12-15 10:45:04.892788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.049 [2024-12-15 10:45:04.892904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00007a00 cdw11:00000000 00:07:16.049 [2024-12-15 10:45:04.892921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.049 [2024-12-15 10:45:04.893030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000000ad cdw11:00000000 00:07:16.049 [2024-12-15 10:45:04.893047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.049 [2024-12-15 10:45:04.893170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:16.049 [2024-12-15 10:45:04.893186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.049 [2024-12-15 10:45:04.893296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:07:16.049 [2024-12-15 10:45:04.893312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:16.049 #42 NEW cov: 11756 ft: 14536 corp: 36/251b lim: 10 exec/s: 42 rss: 70Mb L: 10/10 MS: 1 CrossOver- 00:07:16.049 [2024-12-15 10:45:04.942990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000af7 cdw11:00000000 00:07:16.049 [2024-12-15 10:45:04.943016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.049 [2024-12-15 10:45:04.943130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:16.049 [2024-12-15 10:45:04.943147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.049 [2024-12-15 10:45:04.943258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:16.049 [2024-12-15 10:45:04.943274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.049 [2024-12-15 10:45:04.943385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:16.049 [2024-12-15 10:45:04.943402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.049 [2024-12-15 10:45:04.943522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:16.049 [2024-12-15 10:45:04.943541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:16.049 #43 NEW cov: 
11756 ft: 14537 corp: 37/261b lim: 10 exec/s: 43 rss: 70Mb L: 10/10 MS: 1 ChangeBinInt- 00:07:16.049 [2024-12-15 10:45:04.983118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:16.049 [2024-12-15 10:45:04.983147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.049 [2024-12-15 10:45:04.983264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:16.049 [2024-12-15 10:45:04.983282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.049 [2024-12-15 10:45:04.983384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:16.049 [2024-12-15 10:45:04.983403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.049 [2024-12-15 10:45:04.983509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:16.049 [2024-12-15 10:45:04.983526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.049 [2024-12-15 10:45:04.983635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:07:16.049 [2024-12-15 10:45:04.983653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:16.049 #44 NEW cov: 11756 ft: 14567 corp: 38/271b lim: 10 exec/s: 44 rss: 70Mb L: 10/10 MS: 1 ShuffleBytes- 00:07:16.049 [2024-12-15 10:45:05.022370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00007e2a cdw11:00000000 00:07:16.049 [2024-12-15 10:45:05.022395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.049 #48 NEW cov: 11756 ft: 14576 corp: 39/273b lim: 10 exec/s: 48 rss: 70Mb L: 2/10 MS: 4 ChangeByte-ChangeByte-CopyPart-InsertByte- 00:07:16.308 [2024-12-15 10:45:05.063126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aad cdw11:00000000 00:07:16.308 [2024-12-15 10:45:05.063153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.308 [2024-12-15 10:45:05.063282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000adad cdw11:00000000 00:07:16.308 [2024-12-15 10:45:05.063299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.308 [2024-12-15 10:45:05.063404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00002c2c cdw11:00000000 00:07:16.308 [2024-12-15 10:45:05.063426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.308 [2024-12-15 10:45:05.063558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002cad cdw11:00000000 00:07:16.308 [2024-12-15 10:45:05.063576] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.308 #49 NEW cov: 11756 ft: 14579 corp: 40/281b lim: 10 exec/s: 49 rss: 70Mb L: 8/10 MS: 1 InsertRepeatedBytes- 00:07:16.308 [2024-12-15 10:45:05.102730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000af1 cdw11:00000000 00:07:16.308 [2024-12-15 10:45:05.102758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.308 [2024-12-15 10:45:05.102869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000adad cdw11:00000000 00:07:16.308 [2024-12-15 10:45:05.102885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.308 #50 NEW cov: 11756 ft: 14597 corp: 41/285b lim: 10 exec/s: 25 rss: 70Mb L: 4/10 MS: 1 ChangeByte- 00:07:16.308 #50 DONE cov: 11756 ft: 14597 corp: 41/285b lim: 10 exec/s: 25 rss: 70Mb 00:07:16.308 ###### Recommended dictionary. ###### 00:07:16.308 "\001\004\000\000\000\000\000\000" # Uses: 1 00:07:16.308 "\000\000\000\000" # Uses: 0 00:07:16.308 ###### End of recommended dictionary. ###### 00:07:16.308 Done 50 runs in 2 second(s) 00:07:16.308 10:45:05 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_7.conf 00:07:16.308 10:45:05 -- ../common.sh@72 -- # (( i++ )) 00:07:16.308 10:45:05 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:16.308 10:45:05 -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:07:16.308 10:45:05 -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:07:16.308 10:45:05 -- nvmf/run.sh@24 -- # local timen=1 00:07:16.308 10:45:05 -- nvmf/run.sh@25 -- # local core=0x1 00:07:16.308 10:45:05 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:16.308 10:45:05 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:07:16.308 10:45:05 -- nvmf/run.sh@29 -- # printf %02d 8 00:07:16.308 10:45:05 -- nvmf/run.sh@29 -- # port=4408 00:07:16.308 10:45:05 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:16.308 10:45:05 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:07:16.308 10:45:05 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:16.308 10:45:05 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 -r /var/tmp/spdk8.sock 00:07:16.308 [2024-12-15 10:45:05.295613] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
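The xtrace above shows how nvmf/run.sh retires one fuzzer instance and stages the next: it removes the previous JSON config (/tmp/fuzz_json_7.conf), derives a per-instance TCP port ("44" plus the zero-padded fuzzer number, so instance 8 listens on 4408), rewrites the trsvcid in the fuzz_json.conf template, creates a per-instance corpus directory, and launches llvm_nvme_fuzz against the resulting transport ID. The following condenses those traced commands into a standalone sketch; the WORKSPACE variable and the redirection of sed's output into the per-instance config are assumptions added for readability (set -x does not display redirections), and the flag glosses in the comments are inferred from the surrounding trace rather than from documentation.

#!/usr/bin/env bash
# Sketch reconstructed from the nvmf/run.sh trace above (start_llvm_fuzz 8 1 0x1).
set -e

# Assumption: shorthand for the absolute paths shown in the trace.
WORKSPACE=/var/jenkins/workspace/short-fuzz-phy-autotest

fuzzer_type=8                               # instance number, passed through as -Z
timen=1                                     # run time, passed through as -t
core=0x1                                    # core mask, passed through as -m
corpus_dir=$WORKSPACE/spdk/../corpus/llvm_nvmf_${fuzzer_type}
nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
port=44$(printf '%02d' "$fuzzer_type")      # 8 -> 4408, matching the traced printf/port lines

mkdir -p "$corpus_dir"

trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:${port}"

# Point the shared JSON config template at this instance's listener port
# (the '>' redirection is assumed; xtrace shows only the sed invocation).
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"${port}\"/" \
    "$WORKSPACE/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

"$WORKSPACE/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
    -m "$core" -s 512 \
    -P "$WORKSPACE/spdk/../output/llvm/" \
    -F "$trid" \
    -c "$nvmf_cfg" \
    -t "$timen" \
    -D "$corpus_dir" \
    -Z "$fuzzer_type" \
    -r "/var/tmp/spdk${fuzzer_type}.sock"

For context on the summary printed just before this teardown: the "###### Recommended dictionary. ######" block is standard libFuzzer output suggesting byte sequences (here the CMP-derived "\001\004\000\000\000\000\000\000" and "\000\000\000\000") that could seed a -dict= file in later runs, and "Done 50 runs in 2 second(s)" closes out fuzzer 7 before the instance above starts.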
00:07:16.308 [2024-12-15 10:45:05.295678] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1309888 ] 00:07:16.567 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.567 [2024-12-15 10:45:05.552930] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.826 [2024-12-15 10:45:05.633433] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:16.826 [2024-12-15 10:45:05.633573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.826 [2024-12-15 10:45:05.691929] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.826 [2024-12-15 10:45:05.708233] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:07:16.826 INFO: Running with entropic power schedule (0xFF, 100). 00:07:16.826 INFO: Seed: 3395034627 00:07:16.826 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:16.826 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:16.826 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:16.826 INFO: A corpus is not provided, starting from an empty corpus 00:07:16.826 [2024-12-15 10:45:05.775158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.826 [2024-12-15 10:45:05.775198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.826 #2 INITED cov: 11557 ft: 11558 corp: 1/1b exec/s: 0 rss: 67Mb 00:07:16.826 [2024-12-15 10:45:05.825537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.826 [2024-12-15 10:45:05.825567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.084 #3 NEW cov: 11670 ft: 12200 corp: 2/2b lim: 5 exec/s: 0 rss: 67Mb L: 1/1 MS: 1 ChangeByte- 00:07:17.084 [2024-12-15 10:45:05.885943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.084 [2024-12-15 10:45:05.885971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.084 #4 NEW cov: 11676 ft: 12338 corp: 3/3b lim: 5 exec/s: 0 rss: 67Mb L: 1/1 MS: 1 ChangeBit- 00:07:17.084 [2024-12-15 10:45:05.947222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.084 [2024-12-15 10:45:05.947251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.084 [2024-12-15 10:45:05.947329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.084 [2024-12-15 10:45:05.947345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.084 [2024-12-15 10:45:05.947421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.084 [2024-12-15 10:45:05.947437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.084 [2024-12-15 10:45:05.947514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.084 [2024-12-15 10:45:05.947530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.084 #5 NEW cov: 11761 ft: 13384 corp: 4/7b lim: 5 exec/s: 0 rss: 67Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:07:17.084 [2024-12-15 10:45:05.996745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.084 [2024-12-15 10:45:05.996774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.084 [2024-12-15 10:45:05.996870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.084 [2024-12-15 10:45:05.996887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.084 #6 NEW cov: 11761 ft: 13694 corp: 5/9b lim: 5 exec/s: 0 rss: 67Mb L: 2/4 MS: 1 CopyPart- 00:07:17.084 [2024-12-15 10:45:06.047173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.084 [2024-12-15 10:45:06.047205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.084 [2024-12-15 10:45:06.047300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.084 [2024-12-15 10:45:06.047317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.084 [2024-12-15 10:45:06.047399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.084 [2024-12-15 10:45:06.047419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.084 #7 NEW cov: 11761 ft: 14036 corp: 6/12b lim: 5 exec/s: 0 rss: 67Mb L: 3/4 MS: 1 CrossOver- 00:07:17.343 [2024-12-15 10:45:06.108412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.343 [2024-12-15 10:45:06.108443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.343 [2024-12-15 10:45:06.108533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.343 [2024-12-15 10:45:06.108548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.343 [2024-12-15 10:45:06.108631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.343 [2024-12-15 10:45:06.108647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.343 [2024-12-15 10:45:06.108731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.343 [2024-12-15 10:45:06.108748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.343 [2024-12-15 10:45:06.108827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.343 [2024-12-15 10:45:06.108843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:17.343 #8 NEW cov: 11761 ft: 14122 corp: 7/17b lim: 5 exec/s: 0 rss: 67Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:17.343 [2024-12-15 10:45:06.157649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.343 [2024-12-15 10:45:06.157678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.343 [2024-12-15 10:45:06.157761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.343 [2024-12-15 10:45:06.157777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.343 #9 NEW cov: 11761 ft: 14148 corp: 8/19b lim: 5 exec/s: 0 rss: 67Mb L: 2/5 MS: 1 InsertByte- 00:07:17.343 [2024-12-15 10:45:06.209018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.343 [2024-12-15 10:45:06.209045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.343 [2024-12-15 10:45:06.209139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.343 [2024-12-15 10:45:06.209156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.343 [2024-12-15 10:45:06.209227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.343 [2024-12-15 10:45:06.209242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.343 [2024-12-15 10:45:06.209317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.343 [2024-12-15 10:45:06.209333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.343 [2024-12-15 10:45:06.209407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.343 [2024-12-15 10:45:06.209427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:17.343 #10 NEW cov: 11761 ft: 14253 corp: 9/24b lim: 5 exec/s: 0 rss: 67Mb L: 5/5 MS: 1 ChangeBinInt- 00:07:17.343 [2024-12-15 10:45:06.268046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.343 [2024-12-15 10:45:06.268073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.343 #11 NEW cov: 11761 ft: 14288 corp: 10/25b lim: 5 exec/s: 0 rss: 67Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:17.343 [2024-12-15 10:45:06.319155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.343 [2024-12-15 10:45:06.319181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.343 [2024-12-15 10:45:06.319266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.343 [2024-12-15 10:45:06.319283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.343 [2024-12-15 10:45:06.319364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.343 [2024-12-15 10:45:06.319379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.343 #12 NEW cov: 11761 ft: 14344 corp: 11/28b lim: 5 exec/s: 0 rss: 67Mb L: 3/5 MS: 1 CMP- DE: "\013\000"- 00:07:17.602 [2024-12-15 10:45:06.379403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.602 [2024-12-15 10:45:06.379434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.602 [2024-12-15 10:45:06.379516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.602 [2024-12-15 10:45:06.379533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.602 [2024-12-15 10:45:06.379625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.602 [2024-12-15 10:45:06.379640] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.602 #13 NEW cov: 11761 ft: 14384 corp: 12/31b lim: 5 exec/s: 0 rss: 67Mb L: 3/5 MS: 1 ChangeByte- 00:07:17.602 [2024-12-15 10:45:06.430510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.602 [2024-12-15 10:45:06.430536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.602 [2024-12-15 10:45:06.430631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.602 [2024-12-15 10:45:06.430647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.602 [2024-12-15 10:45:06.430722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.602 [2024-12-15 10:45:06.430736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.603 [2024-12-15 10:45:06.430809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.603 [2024-12-15 10:45:06.430824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.603 [2024-12-15 10:45:06.430891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.603 [2024-12-15 10:45:06.430905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:17.603 #14 NEW cov: 11761 ft: 14437 corp: 13/36b lim: 5 exec/s: 0 rss: 68Mb L: 5/5 MS: 1 CrossOver- 00:07:17.603 [2024-12-15 10:45:06.489488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.603 [2024-12-15 10:45:06.489514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.603 #15 NEW cov: 11761 ft: 14490 corp: 14/37b lim: 5 exec/s: 0 rss: 68Mb L: 1/5 MS: 1 CrossOver- 00:07:17.603 [2024-12-15 10:45:06.550015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.603 [2024-12-15 10:45:06.550043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.603 #16 NEW cov: 11761 ft: 14573 corp: 15/38b lim: 5 exec/s: 0 rss: 68Mb L: 1/5 MS: 1 ChangeByte- 00:07:17.603 [2024-12-15 10:45:06.611679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.603 [2024-12-15 10:45:06.611707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.603 [2024-12-15 
10:45:06.611780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.603 [2024-12-15 10:45:06.611796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.603 [2024-12-15 10:45:06.611872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.603 [2024-12-15 10:45:06.611887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.603 [2024-12-15 10:45:06.611971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.603 [2024-12-15 10:45:06.611991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.603 [2024-12-15 10:45:06.612062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.603 [2024-12-15 10:45:06.612078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:18.122 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:18.122 #17 NEW cov: 11784 ft: 14623 corp: 16/43b lim: 5 exec/s: 17 rss: 69Mb L: 5/5 MS: 1 ChangeBit- 00:07:18.122 [2024-12-15 10:45:06.930739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.122 [2024-12-15 10:45:06.930775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.122 [2024-12-15 10:45:06.930899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.122 [2024-12-15 10:45:06.930916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.122 [2024-12-15 10:45:06.931041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.122 [2024-12-15 10:45:06.931057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.122 #18 NEW cov: 11784 ft: 14782 corp: 17/46b lim: 5 exec/s: 18 rss: 69Mb L: 3/5 MS: 1 ShuffleBytes- 00:07:18.122 [2024-12-15 10:45:06.981054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.122 [2024-12-15 10:45:06.981084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.122 [2024-12-15 10:45:06.981205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:18.122 [2024-12-15 10:45:06.981222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.122 [2024-12-15 10:45:06.981338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.122 [2024-12-15 10:45:06.981356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.122 [2024-12-15 10:45:06.981482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.122 [2024-12-15 10:45:06.981498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.122 #19 NEW cov: 11784 ft: 14825 corp: 18/50b lim: 5 exec/s: 19 rss: 69Mb L: 4/5 MS: 1 PersAutoDict- DE: "\013\000"- 00:07:18.122 [2024-12-15 10:45:07.030675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.122 [2024-12-15 10:45:07.030703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.122 [2024-12-15 10:45:07.030819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.122 [2024-12-15 10:45:07.030836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.122 #20 NEW cov: 11784 ft: 14842 corp: 19/52b lim: 5 exec/s: 20 rss: 69Mb L: 2/5 MS: 1 ChangeByte- 00:07:18.122 [2024-12-15 10:45:07.070506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.122 [2024-12-15 10:45:07.070534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.122 #21 NEW cov: 11784 ft: 14931 corp: 20/53b lim: 5 exec/s: 21 rss: 69Mb L: 1/5 MS: 1 ChangeByte- 00:07:18.122 [2024-12-15 10:45:07.111346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.122 [2024-12-15 10:45:07.111373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.122 [2024-12-15 10:45:07.111519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.122 [2024-12-15 10:45:07.111536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.122 [2024-12-15 10:45:07.111657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.122 [2024-12-15 10:45:07.111674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.122 
[2024-12-15 10:45:07.111768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.122 [2024-12-15 10:45:07.111786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.382 #22 NEW cov: 11784 ft: 14937 corp: 21/57b lim: 5 exec/s: 22 rss: 69Mb L: 4/5 MS: 1 ShuffleBytes- 00:07:18.382 [2024-12-15 10:45:07.161554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.382 [2024-12-15 10:45:07.161583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.382 [2024-12-15 10:45:07.161706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.382 [2024-12-15 10:45:07.161722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.382 [2024-12-15 10:45:07.161856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.382 [2024-12-15 10:45:07.161873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.382 [2024-12-15 10:45:07.161993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.382 [2024-12-15 10:45:07.162009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.382 #23 NEW cov: 11784 ft: 14963 corp: 22/61b lim: 5 exec/s: 23 rss: 70Mb L: 4/5 MS: 1 CrossOver- 00:07:18.382 [2024-12-15 10:45:07.210942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.382 [2024-12-15 10:45:07.210968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.382 #24 NEW cov: 11784 ft: 15007 corp: 23/62b lim: 5 exec/s: 24 rss: 70Mb L: 1/5 MS: 1 CrossOver- 00:07:18.382 [2024-12-15 10:45:07.251851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.382 [2024-12-15 10:45:07.251878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.382 [2024-12-15 10:45:07.251996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.382 [2024-12-15 10:45:07.252015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.382 [2024-12-15 10:45:07.252136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:18.382 [2024-12-15 10:45:07.252153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.382 [2024-12-15 10:45:07.252267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.382 [2024-12-15 10:45:07.252282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.382 #25 NEW cov: 11784 ft: 15036 corp: 24/66b lim: 5 exec/s: 25 rss: 70Mb L: 4/5 MS: 1 ChangeBit- 00:07:18.382 [2024-12-15 10:45:07.291917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.382 [2024-12-15 10:45:07.291942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.382 [2024-12-15 10:45:07.292063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.382 [2024-12-15 10:45:07.292081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.382 [2024-12-15 10:45:07.292197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.382 [2024-12-15 10:45:07.292213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.382 [2024-12-15 10:45:07.292333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.382 [2024-12-15 10:45:07.292351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.382 #26 NEW cov: 11784 ft: 15048 corp: 25/70b lim: 5 exec/s: 26 rss: 70Mb L: 4/5 MS: 1 EraseBytes- 00:07:18.382 [2024-12-15 10:45:07.331571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.382 [2024-12-15 10:45:07.331597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.382 [2024-12-15 10:45:07.331715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.382 [2024-12-15 10:45:07.331732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.382 #27 NEW cov: 11784 ft: 15067 corp: 26/72b lim: 5 exec/s: 27 rss: 70Mb L: 2/5 MS: 1 InsertByte- 00:07:18.382 [2024-12-15 10:45:07.371708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.382 [2024-12-15 10:45:07.371736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.382 [2024-12-15 10:45:07.371853] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.382 [2024-12-15 10:45:07.371871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.641 #28 NEW cov: 11784 ft: 15088 corp: 27/74b lim: 5 exec/s: 28 rss: 70Mb L: 2/5 MS: 1 ChangeByte- 00:07:18.641 [2024-12-15 10:45:07.412314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.641 [2024-12-15 10:45:07.412339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.641 [2024-12-15 10:45:07.412463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.641 [2024-12-15 10:45:07.412481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.641 [2024-12-15 10:45:07.412601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.641 [2024-12-15 10:45:07.412617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.641 [2024-12-15 10:45:07.412732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.641 [2024-12-15 10:45:07.412748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.641 #29 NEW cov: 11784 ft: 15170 corp: 28/78b lim: 5 exec/s: 29 rss: 70Mb L: 4/5 MS: 1 PersAutoDict- DE: "\013\000"- 00:07:18.641 [2024-12-15 10:45:07.462672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.641 [2024-12-15 10:45:07.462698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.641 [2024-12-15 10:45:07.462821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.641 [2024-12-15 10:45:07.462855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.641 [2024-12-15 10:45:07.462977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.641 [2024-12-15 10:45:07.462993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.641 [2024-12-15 10:45:07.463114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.641 [2024-12-15 10:45:07.463130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.641 [2024-12-15 10:45:07.463249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.641 [2024-12-15 10:45:07.463265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:18.641 #30 NEW cov: 11784 ft: 15186 corp: 29/83b lim: 5 exec/s: 30 rss: 70Mb L: 5/5 MS: 1 ChangeBit- 00:07:18.641 [2024-12-15 10:45:07.502491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.641 [2024-12-15 10:45:07.502521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.642 [2024-12-15 10:45:07.502628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.642 [2024-12-15 10:45:07.502645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.642 [2024-12-15 10:45:07.502761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.642 [2024-12-15 10:45:07.502776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.642 [2024-12-15 10:45:07.502895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.642 [2024-12-15 10:45:07.502912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.642 #31 NEW cov: 11784 ft: 15203 corp: 30/87b lim: 5 exec/s: 31 rss: 70Mb L: 4/5 MS: 1 PersAutoDict- DE: "\013\000"- 00:07:18.642 [2024-12-15 10:45:07.542636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.642 [2024-12-15 10:45:07.542662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.642 [2024-12-15 10:45:07.542780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.642 [2024-12-15 10:45:07.542797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.642 [2024-12-15 10:45:07.542912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.642 [2024-12-15 10:45:07.542926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.642 [2024-12-15 10:45:07.543040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.642 [2024-12-15 
10:45:07.543056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.642 #32 NEW cov: 11784 ft: 15218 corp: 31/91b lim: 5 exec/s: 32 rss: 70Mb L: 4/5 MS: 1 ChangeByte- 00:07:18.642 [2024-12-15 10:45:07.592255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.642 [2024-12-15 10:45:07.592281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.642 [2024-12-15 10:45:07.592391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.642 [2024-12-15 10:45:07.592408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.642 #33 NEW cov: 11784 ft: 15258 corp: 32/93b lim: 5 exec/s: 33 rss: 70Mb L: 2/5 MS: 1 CrossOver- 00:07:18.642 [2024-12-15 10:45:07.633290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.642 [2024-12-15 10:45:07.633317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.642 [2024-12-15 10:45:07.633438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.642 [2024-12-15 10:45:07.633457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.642 [2024-12-15 10:45:07.633572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.642 [2024-12-15 10:45:07.633588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.642 [2024-12-15 10:45:07.633701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.642 [2024-12-15 10:45:07.633717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.642 [2024-12-15 10:45:07.633834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.642 [2024-12-15 10:45:07.633849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:18.642 #34 NEW cov: 11784 ft: 15289 corp: 33/98b lim: 5 exec/s: 34 rss: 70Mb L: 5/5 MS: 1 CrossOver- 00:07:18.902 [2024-12-15 10:45:07.672308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.902 [2024-12-15 10:45:07.672335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.902 #35 NEW cov: 11784 ft: 15299 corp: 34/99b lim: 5 exec/s: 
35 rss: 70Mb L: 1/5 MS: 1 ChangeBinInt- 00:07:18.902 [2024-12-15 10:45:07.712752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.902 [2024-12-15 10:45:07.712778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.902 [2024-12-15 10:45:07.712898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.902 [2024-12-15 10:45:07.712932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.902 #36 NEW cov: 11784 ft: 15312 corp: 35/101b lim: 5 exec/s: 36 rss: 70Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:18.902 [2024-12-15 10:45:07.753668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.902 [2024-12-15 10:45:07.753693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.902 [2024-12-15 10:45:07.753813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.902 [2024-12-15 10:45:07.753830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.902 [2024-12-15 10:45:07.753965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.902 [2024-12-15 10:45:07.753981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.902 [2024-12-15 10:45:07.754102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.902 [2024-12-15 10:45:07.754122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.902 [2024-12-15 10:45:07.754240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.902 [2024-12-15 10:45:07.754258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:18.902 #37 NEW cov: 11784 ft: 15329 corp: 36/106b lim: 5 exec/s: 18 rss: 70Mb L: 5/5 MS: 1 ShuffleBytes- 00:07:18.902 #37 DONE cov: 11784 ft: 15329 corp: 36/106b lim: 5 exec/s: 18 rss: 70Mb 00:07:18.902 ###### Recommended dictionary. ###### 00:07:18.902 "\013\000" # Uses: 3 00:07:18.902 ###### End of recommended dictionary. 
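The "Recommended dictionary" block above is libFuzzer reporting which byte sequences proved productive in this run: the two-byte token "\013\000" (octal for 0x0b 0x00) was injected by the PersAutoDict mutation in several of the inputs that reached new coverage, and was used 3 times overall. As a sketch only, such a token could be carried into a later run using libFuzzer's dictionary-file format; the file name and hex escapes below are illustrative, and whether the llvm_nvme_fuzz wrapper forwards -dict through to libFuzzer is not shown in this log:

    # libFuzzer dictionary entries use name="value" with \xNN escapes;
    # this is the same two bytes the log prints in octal as "\013\000"
    cat > nvmf_9.dict <<'EOF'
    kw1="\x0b\x00"
    EOF
    # hypothetical reuse on a later round, assuming -dict is forwarded
    ./llvm_nvme_fuzz ... -dict=nvmf_9.dict ...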
###### 00:07:18.902 Done 37 runs in 2 second(s) 00:07:18.902 10:45:07 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_8.conf 00:07:18.902 10:45:07 -- ../common.sh@72 -- # (( i++ )) 00:07:18.902 10:45:07 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:18.902 10:45:07 -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:07:18.902 10:45:07 -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:07:18.902 10:45:07 -- nvmf/run.sh@24 -- # local timen=1 00:07:18.902 10:45:07 -- nvmf/run.sh@25 -- # local core=0x1 00:07:18.902 10:45:07 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:18.902 10:45:07 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:07:18.902 10:45:07 -- nvmf/run.sh@29 -- # printf %02d 9 00:07:18.902 10:45:07 -- nvmf/run.sh@29 -- # port=4409 00:07:18.902 10:45:07 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:18.902 10:45:07 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:07:18.902 10:45:07 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:19.162 10:45:07 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 -r /var/tmp/spdk9.sock 00:07:19.162 [2024-12-15 10:45:07.945078] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:19.162 [2024-12-15 10:45:07.945140] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1310595 ] 00:07:19.162 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.420 [2024-12-15 10:45:08.202987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.420 [2024-12-15 10:45:08.287302] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:19.420 [2024-12-15 10:45:08.287428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.420 [2024-12-15 10:45:08.345445] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.420 [2024-12-15 10:45:08.361766] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:07:19.420 INFO: Running with entropic power schedule (0xFF, 100). 
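The nvmf/run.sh trace above shows how round 9 is staged before the libFuzzer banner (INFO: Seed, Loaded modules) that follows: the previous round's JSON config is removed, the listener port is derived from the round number (44 plus the zero-padded index, here 4409), a fresh corpus directory is created, a TCP transport ID is assembled, and sed rewrites the trsvcid in the shared fuzz_json.conf template. A condensed sketch of that staging, reconstructed from the trace ($rootdir and the redirection of sed's output into the per-round config are assumptions, since xtrace does not show redirections):

    i=9
    port=44$(printf '%02d' "$i")                      # -> 4409
    nvmf_cfg=/tmp/fuzz_json_$i.conf
    corpus_dir=$rootdir/../corpus/llvm_nvmf_$i
    mkdir -p "$corpus_dir"
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    # point the shared JSON config at this round's port
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
    "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
        -m 0x1 -s 512 -P "$rootdir/../output/llvm/" -F "$trid" \
        -c "$nvmf_cfg" -t 1 -D "$corpus_dir" -Z "$i" -r "/var/tmp/spdk$i.sock"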
00:07:19.420 INFO: Seed: 1754058477 00:07:19.420 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:19.420 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:19.420 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:19.420 INFO: A corpus is not provided, starting from an empty corpus 00:07:19.678 [2024-12-15 10:45:08.438384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.679 [2024-12-15 10:45:08.438427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.679 #2 INITED cov: 11552 ft: 11551 corp: 1/1b exec/s: 0 rss: 66Mb 00:07:19.679 [2024-12-15 10:45:08.488853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.679 [2024-12-15 10:45:08.488884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.679 [2024-12-15 10:45:08.488952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.679 [2024-12-15 10:45:08.488968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.679 #3 NEW cov: 11670 ft: 12769 corp: 2/3b lim: 5 exec/s: 0 rss: 67Mb L: 2/2 MS: 1 InsertByte- 00:07:19.679 [2024-12-15 10:45:08.548964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.679 [2024-12-15 10:45:08.548991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.679 [2024-12-15 10:45:08.549060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.679 [2024-12-15 10:45:08.549075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.679 #4 NEW cov: 11676 ft: 12911 corp: 3/5b lim: 5 exec/s: 0 rss: 67Mb L: 2/2 MS: 1 ShuffleBytes- 00:07:19.679 [2024-12-15 10:45:08.598806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.679 [2024-12-15 10:45:08.598833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.679 #5 NEW cov: 11761 ft: 13106 corp: 4/6b lim: 5 exec/s: 0 rss: 67Mb L: 1/2 MS: 1 EraseBytes- 00:07:19.679 [2024-12-15 10:45:08.649078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.679 [2024-12-15 10:45:08.649106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.679 #6 NEW cov: 11761 ft: 13303 corp: 5/7b lim: 5 exec/s: 0 rss: 67Mb L: 1/2 MS: 1 ChangeByte- 00:07:19.938 
[2024-12-15 10:45:08.699251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.938 [2024-12-15 10:45:08.699278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.938 #7 NEW cov: 11761 ft: 13375 corp: 6/8b lim: 5 exec/s: 0 rss: 67Mb L: 1/2 MS: 1 EraseBytes- 00:07:19.938 [2024-12-15 10:45:08.739228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.938 [2024-12-15 10:45:08.739254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.938 [2024-12-15 10:45:08.739318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.938 [2024-12-15 10:45:08.739333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.938 [2024-12-15 10:45:08.739398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.938 [2024-12-15 10:45:08.739418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.938 #8 NEW cov: 11761 ft: 13650 corp: 7/11b lim: 5 exec/s: 0 rss: 67Mb L: 3/3 MS: 1 CrossOver- 00:07:19.938 [2024-12-15 10:45:08.789848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.938 [2024-12-15 10:45:08.789879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.938 [2024-12-15 10:45:08.789947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.938 [2024-12-15 10:45:08.789962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.938 #9 NEW cov: 11761 ft: 13766 corp: 8/13b lim: 5 exec/s: 0 rss: 67Mb L: 2/3 MS: 1 CrossOver- 00:07:19.938 [2024-12-15 10:45:08.840072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.938 [2024-12-15 10:45:08.840099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.938 [2024-12-15 10:45:08.840168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.938 [2024-12-15 10:45:08.840183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.938 #10 NEW cov: 11761 ft: 13853 corp: 9/15b lim: 5 exec/s: 0 rss: 67Mb L: 2/3 MS: 1 ShuffleBytes- 00:07:19.938 [2024-12-15 10:45:08.891307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT 
(0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.938 [2024-12-15 10:45:08.891333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.939 [2024-12-15 10:45:08.891406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.939 [2024-12-15 10:45:08.891425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.939 [2024-12-15 10:45:08.891490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.939 [2024-12-15 10:45:08.891505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.939 [2024-12-15 10:45:08.891575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.939 [2024-12-15 10:45:08.891590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:19.939 [2024-12-15 10:45:08.891658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.939 [2024-12-15 10:45:08.891673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:19.939 #11 NEW cov: 11761 ft: 14211 corp: 10/20b lim: 5 exec/s: 0 rss: 67Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:19.939 [2024-12-15 10:45:08.950895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.939 [2024-12-15 10:45:08.950921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.939 [2024-12-15 10:45:08.950990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.939 [2024-12-15 10:45:08.951004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.939 [2024-12-15 10:45:08.951086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.939 [2024-12-15 10:45:08.951103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.198 #12 NEW cov: 11761 ft: 14260 corp: 11/23b lim: 5 exec/s: 0 rss: 67Mb L: 3/5 MS: 1 InsertByte- 00:07:20.198 [2024-12-15 10:45:09.010300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.198 [2024-12-15 10:45:09.010326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.198 #13 NEW cov: 11761 ft: 14325 corp: 
12/24b lim: 5 exec/s: 0 rss: 67Mb L: 1/5 MS: 1 CrossOver- 00:07:20.198 [2024-12-15 10:45:09.061238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.198 [2024-12-15 10:45:09.061265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.198 [2024-12-15 10:45:09.061338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.198 [2024-12-15 10:45:09.061352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.198 [2024-12-15 10:45:09.061425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.198 [2024-12-15 10:45:09.061440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.198 #14 NEW cov: 11761 ft: 14337 corp: 13/27b lim: 5 exec/s: 0 rss: 67Mb L: 3/5 MS: 1 ShuffleBytes- 00:07:20.198 [2024-12-15 10:45:09.121141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.198 [2024-12-15 10:45:09.121166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.198 [2024-12-15 10:45:09.121235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.198 [2024-12-15 10:45:09.121249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.198 #15 NEW cov: 11761 ft: 14356 corp: 14/29b lim: 5 exec/s: 0 rss: 67Mb L: 2/5 MS: 1 InsertByte- 00:07:20.198 [2024-12-15 10:45:09.172057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.198 [2024-12-15 10:45:09.172083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.198 [2024-12-15 10:45:09.172154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.198 [2024-12-15 10:45:09.172169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.198 [2024-12-15 10:45:09.172234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.198 [2024-12-15 10:45:09.172248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.198 [2024-12-15 10:45:09.172314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.198 [2024-12-15 10:45:09.172328] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.198 #16 NEW cov: 11761 ft: 14436 corp: 15/33b lim: 5 exec/s: 0 rss: 68Mb L: 4/5 MS: 1 InsertByte- 00:07:20.457 [2024-12-15 10:45:09.221568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.457 [2024-12-15 10:45:09.221595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.457 [2024-12-15 10:45:09.221668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.457 [2024-12-15 10:45:09.221681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.457 #17 NEW cov: 11761 ft: 14443 corp: 16/35b lim: 5 exec/s: 0 rss: 68Mb L: 2/5 MS: 1 InsertByte- 00:07:20.457 [2024-12-15 10:45:09.271709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.457 [2024-12-15 10:45:09.271735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.458 [2024-12-15 10:45:09.271803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.458 [2024-12-15 10:45:09.271818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.716 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:20.716 #18 NEW cov: 11784 ft: 14499 corp: 17/37b lim: 5 exec/s: 18 rss: 69Mb L: 2/5 MS: 1 ChangeByte- 00:07:20.716 [2024-12-15 10:45:09.581999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.716 [2024-12-15 10:45:09.582043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.716 [2024-12-15 10:45:09.582177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.716 [2024-12-15 10:45:09.582198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.716 #19 NEW cov: 11784 ft: 14711 corp: 18/39b lim: 5 exec/s: 19 rss: 69Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:20.716 [2024-12-15 10:45:09.631767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.716 [2024-12-15 10:45:09.631796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.716 #20 NEW cov: 11784 ft: 14756 corp: 19/40b lim: 5 exec/s: 20 rss: 69Mb L: 1/5 MS: 1 EraseBytes- 00:07:20.716 [2024-12-15 10:45:09.672214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.716 [2024-12-15 10:45:09.672242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.717 [2024-12-15 10:45:09.672355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.717 [2024-12-15 10:45:09.672372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.717 #21 NEW cov: 11784 ft: 14813 corp: 20/42b lim: 5 exec/s: 21 rss: 69Mb L: 2/5 MS: 1 CopyPart- 00:07:20.717 [2024-12-15 10:45:09.712032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.717 [2024-12-15 10:45:09.712063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.975 #22 NEW cov: 11784 ft: 14866 corp: 21/43b lim: 5 exec/s: 22 rss: 69Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:20.975 [2024-12-15 10:45:09.752602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.975 [2024-12-15 10:45:09.752629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.975 [2024-12-15 10:45:09.752747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.975 [2024-12-15 10:45:09.752763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.975 [2024-12-15 10:45:09.752873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.975 [2024-12-15 10:45:09.752890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.976 #23 NEW cov: 11784 ft: 14872 corp: 22/46b lim: 5 exec/s: 23 rss: 69Mb L: 3/5 MS: 1 InsertByte- 00:07:20.976 [2024-12-15 10:45:09.802767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.976 [2024-12-15 10:45:09.802795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.976 [2024-12-15 10:45:09.802909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.976 [2024-12-15 10:45:09.802926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.976 [2024-12-15 10:45:09.803040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.976 [2024-12-15 10:45:09.803058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.976 #24 NEW cov: 11784 ft: 14911 corp: 23/49b lim: 5 exec/s: 24 rss: 69Mb L: 3/5 MS: 1 ChangeBit- 00:07:20.976 [2024-12-15 10:45:09.842630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.976 [2024-12-15 10:45:09.842656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.976 [2024-12-15 10:45:09.842780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.976 [2024-12-15 10:45:09.842797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.976 #25 NEW cov: 11784 ft: 14997 corp: 24/51b lim: 5 exec/s: 25 rss: 69Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:20.976 [2024-12-15 10:45:09.892512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.976 [2024-12-15 10:45:09.892542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.976 #26 NEW cov: 11784 ft: 15118 corp: 25/52b lim: 5 exec/s: 26 rss: 69Mb L: 1/5 MS: 1 EraseBytes- 00:07:20.976 [2024-12-15 10:45:09.942682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.976 [2024-12-15 10:45:09.942711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.976 #27 NEW cov: 11784 ft: 15172 corp: 26/53b lim: 5 exec/s: 27 rss: 70Mb L: 1/5 MS: 1 EraseBytes- 00:07:20.976 [2024-12-15 10:45:09.983904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.976 [2024-12-15 10:45:09.983933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.976 [2024-12-15 10:45:09.984058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.976 [2024-12-15 10:45:09.984074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.976 [2024-12-15 10:45:09.984191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.976 [2024-12-15 10:45:09.984209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.976 [2024-12-15 10:45:09.984330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.976 [2024-12-15 10:45:09.984347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.976 [2024-12-15 
10:45:09.984471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.976 [2024-12-15 10:45:09.984488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:21.235 #28 NEW cov: 11784 ft: 15197 corp: 27/58b lim: 5 exec/s: 28 rss: 70Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:21.235 [2024-12-15 10:45:10.023478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.235 [2024-12-15 10:45:10.023506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.235 [2024-12-15 10:45:10.023620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.235 [2024-12-15 10:45:10.023637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.235 [2024-12-15 10:45:10.023761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.235 [2024-12-15 10:45:10.023779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.235 #29 NEW cov: 11784 ft: 15224 corp: 28/61b lim: 5 exec/s: 29 rss: 70Mb L: 3/5 MS: 1 ChangeByte- 00:07:21.235 [2024-12-15 10:45:10.063369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.235 [2024-12-15 10:45:10.063398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.235 [2024-12-15 10:45:10.063518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.235 [2024-12-15 10:45:10.063536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.235 #30 NEW cov: 11784 ft: 15231 corp: 29/63b lim: 5 exec/s: 30 rss: 70Mb L: 2/5 MS: 1 InsertByte- 00:07:21.235 [2024-12-15 10:45:10.103773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.235 [2024-12-15 10:45:10.103804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.235 [2024-12-15 10:45:10.103926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.235 [2024-12-15 10:45:10.103942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.235 [2024-12-15 10:45:10.104060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:21.235 [2024-12-15 10:45:10.104077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.235 #31 NEW cov: 11784 ft: 15247 corp: 30/66b lim: 5 exec/s: 31 rss: 70Mb L: 3/5 MS: 1 CopyPart- 00:07:21.235 [2024-12-15 10:45:10.154340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.235 [2024-12-15 10:45:10.154368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.235 [2024-12-15 10:45:10.154428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.235 [2024-12-15 10:45:10.154439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.235 [2024-12-15 10:45:10.154457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.235 [2024-12-15 10:45:10.154467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.235 [2024-12-15 10:45:10.154484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.235 [2024-12-15 10:45:10.154495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.236 #32 NEW cov: 11793 ft: 15273 corp: 31/70b lim: 5 exec/s: 32 rss: 70Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:07:21.236 [2024-12-15 10:45:10.194247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.236 [2024-12-15 10:45:10.194274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.236 [2024-12-15 10:45:10.194397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.236 [2024-12-15 10:45:10.194419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.236 [2024-12-15 10:45:10.194543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.236 [2024-12-15 10:45:10.194558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.236 [2024-12-15 10:45:10.194682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.236 [2024-12-15 10:45:10.194699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.236 #33 NEW cov: 11793 ft: 15287 corp: 32/74b lim: 5 exec/s: 33 rss: 70Mb L: 4/5 MS: 1 EraseBytes- 00:07:21.236 [2024-12-15 
10:45:10.244420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.236 [2024-12-15 10:45:10.244447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.236 [2024-12-15 10:45:10.244567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.236 [2024-12-15 10:45:10.244583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.236 [2024-12-15 10:45:10.244707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.236 [2024-12-15 10:45:10.244724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.236 [2024-12-15 10:45:10.244848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.236 [2024-12-15 10:45:10.244863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.494 #34 NEW cov: 11793 ft: 15295 corp: 33/78b lim: 5 exec/s: 34 rss: 70Mb L: 4/5 MS: 1 InsertByte- 00:07:21.495 [2024-12-15 10:45:10.294610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.495 [2024-12-15 10:45:10.294638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.495 [2024-12-15 10:45:10.294758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.495 [2024-12-15 10:45:10.294776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.495 [2024-12-15 10:45:10.294894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.495 [2024-12-15 10:45:10.294911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.495 [2024-12-15 10:45:10.295032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.495 [2024-12-15 10:45:10.295048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.495 #35 NEW cov: 11793 ft: 15334 corp: 34/82b lim: 5 exec/s: 35 rss: 70Mb L: 4/5 MS: 1 InsertByte- 00:07:21.495 [2024-12-15 10:45:10.334196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.495 [2024-12-15 10:45:10.334223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.495 [2024-12-15 10:45:10.334340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.495 [2024-12-15 10:45:10.334358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.495 #36 NEW cov: 11793 ft: 15344 corp: 35/84b lim: 5 exec/s: 36 rss: 70Mb L: 2/5 MS: 1 ChangeBinInt- 00:07:21.495 [2024-12-15 10:45:10.374235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.495 [2024-12-15 10:45:10.374264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.495 [2024-12-15 10:45:10.374385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.495 [2024-12-15 10:45:10.374402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.495 #37 NEW cov: 11793 ft: 15351 corp: 36/86b lim: 5 exec/s: 37 rss: 70Mb L: 2/5 MS: 1 ChangeBinInt- 00:07:21.495 [2024-12-15 10:45:10.414379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.495 [2024-12-15 10:45:10.414407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.495 [2024-12-15 10:45:10.414535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.495 [2024-12-15 10:45:10.414555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.495 #38 NEW cov: 11793 ft: 15362 corp: 37/88b lim: 5 exec/s: 19 rss: 70Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:21.495 #38 DONE cov: 11793 ft: 15362 corp: 37/88b lim: 5 exec/s: 19 rss: 70Mb 00:07:21.495 Done 38 runs in 2 second(s) 00:07:21.754 10:45:10 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_9.conf 00:07:21.754 10:45:10 -- ../common.sh@72 -- # (( i++ )) 00:07:21.754 10:45:10 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:21.754 10:45:10 -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:07:21.754 10:45:10 -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:07:21.754 10:45:10 -- nvmf/run.sh@24 -- # local timen=1 00:07:21.754 10:45:10 -- nvmf/run.sh@25 -- # local core=0x1 00:07:21.754 10:45:10 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:21.754 10:45:10 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:07:21.754 10:45:10 -- nvmf/run.sh@29 -- # printf %02d 10 00:07:21.754 10:45:10 -- nvmf/run.sh@29 -- # port=4410 00:07:21.754 10:45:10 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:21.754 10:45:10 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:07:21.754 10:45:10 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:21.754 10:45:10 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 -r /var/tmp/spdk10.sock 00:07:21.754 [2024-12-15 10:45:10.605306] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:21.754 [2024-12-15 10:45:10.605367] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1311138 ] 00:07:21.754 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.013 [2024-12-15 10:45:10.859243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.013 [2024-12-15 10:45:10.942091] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:22.013 [2024-12-15 10:45:10.942209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.013 [2024-12-15 10:45:10.999875] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.013 [2024-12-15 10:45:11.016179] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:07:22.272 INFO: Running with entropic power schedule (0xFF, 100). 00:07:22.272 INFO: Seed: 111091387 00:07:22.272 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:22.272 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:22.272 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:22.272 INFO: A corpus is not provided, starting from an empty corpus 00:07:22.272 #2 INITED exec/s: 0 rss: 60Mb 00:07:22.272 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:22.272 This may also happen if the target rejected all inputs we tried so far 00:07:22.272 [2024-12-15 10:45:11.065165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.272 [2024-12-15 10:45:11.065193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.272 [2024-12-15 10:45:11.065249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.272 [2024-12-15 10:45:11.065263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.272 [2024-12-15 10:45:11.065318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.272 [2024-12-15 10:45:11.065331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.272 [2024-12-15 10:45:11.065384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.272 [2024-12-15 10:45:11.065397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.531 NEW_FUNC[1/670]: 0x447688 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:07:22.531 NEW_FUNC[2/670]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:22.531 #3 NEW cov: 11580 ft: 11572 corp: 2/34b lim: 40 exec/s: 0 rss: 68Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:07:22.531 [2024-12-15 10:45:11.365685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.531 [2024-12-15 10:45:11.365717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.531 [2024-12-15 10:45:11.365776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.531 [2024-12-15 10:45:11.365789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.531 #8 NEW cov: 11693 ft: 12530 corp: 3/54b lim: 40 exec/s: 0 rss: 69Mb L: 20/33 MS: 5 ChangeBit-InsertByte-ShuffleBytes-EraseBytes-InsertRepeatedBytes- 00:07:22.531 [2024-12-15 10:45:11.405957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.531 [2024-12-15 10:45:11.405982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.531 [2024-12-15 10:45:11.406037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:22.531 [2024-12-15 10:45:11.406051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.531 [2024-12-15 10:45:11.406107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:fbffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.531 [2024-12-15 10:45:11.406120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.531 [2024-12-15 10:45:11.406176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.531 [2024-12-15 10:45:11.406189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.531 #9 NEW cov: 11699 ft: 12844 corp: 4/87b lim: 40 exec/s: 0 rss: 69Mb L: 33/33 MS: 1 ChangeBit- 00:07:22.531 [2024-12-15 10:45:11.445811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.531 [2024-12-15 10:45:11.445839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.531 [2024-12-15 10:45:11.445896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.531 [2024-12-15 10:45:11.445911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.531 #10 NEW cov: 11784 ft: 13053 corp: 5/107b lim: 40 exec/s: 0 rss: 69Mb L: 20/33 MS: 1 ChangeBinInt- 00:07:22.531 [2024-12-15 10:45:11.485929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.531 [2024-12-15 10:45:11.485954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.531 [2024-12-15 10:45:11.486013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.531 [2024-12-15 10:45:11.486027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.531 #11 NEW cov: 11784 ft: 13183 corp: 6/127b lim: 40 exec/s: 0 rss: 69Mb L: 20/33 MS: 1 CrossOver- 00:07:22.531 [2024-12-15 10:45:11.526285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.531 [2024-12-15 10:45:11.526309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.531 [2024-12-15 10:45:11.526380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.531 [2024-12-15 10:45:11.526393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.531 [2024-12-15 10:45:11.526452] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.531 [2024-12-15 10:45:11.526466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.531 [2024-12-15 10:45:11.526533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.531 [2024-12-15 10:45:11.526546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.790 #12 NEW cov: 11784 ft: 13345 corp: 7/162b lim: 40 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:22.791 [2024-12-15 10:45:11.566131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.791 [2024-12-15 10:45:11.566155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.791 [2024-12-15 10:45:11.566214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.791 [2024-12-15 10:45:11.566231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.791 #13 NEW cov: 11784 ft: 13460 corp: 8/181b lim: 40 exec/s: 0 rss: 69Mb L: 19/35 MS: 1 EraseBytes- 00:07:22.791 [2024-12-15 10:45:11.606496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffe5e5e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.791 [2024-12-15 10:45:11.606520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.791 [2024-12-15 10:45:11.606578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e5e5e5ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.791 [2024-12-15 10:45:11.606591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.791 [2024-12-15 10:45:11.606646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.791 [2024-12-15 10:45:11.606659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.791 [2024-12-15 10:45:11.606716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.791 [2024-12-15 10:45:11.606729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.791 #14 NEW cov: 11784 ft: 13503 corp: 9/220b lim: 40 exec/s: 0 rss: 69Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:07:22.791 [2024-12-15 10:45:11.646621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.791 [2024-12-15 
10:45:11.646645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.791 [2024-12-15 10:45:11.646703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.791 [2024-12-15 10:45:11.646716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.791 [2024-12-15 10:45:11.646770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.791 [2024-12-15 10:45:11.646783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.791 [2024-12-15 10:45:11.646838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.791 [2024-12-15 10:45:11.646851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.791 #15 NEW cov: 11784 ft: 13546 corp: 10/253b lim: 40 exec/s: 0 rss: 69Mb L: 33/39 MS: 1 ShuffleBytes- 00:07:22.791 [2024-12-15 10:45:11.686757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.791 [2024-12-15 10:45:11.686780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.791 [2024-12-15 10:45:11.686852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.791 [2024-12-15 10:45:11.686865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.791 [2024-12-15 10:45:11.686921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:90909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.791 [2024-12-15 10:45:11.686937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.791 [2024-12-15 10:45:11.686993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:90909090 cdw11:90000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.791 [2024-12-15 10:45:11.687006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.791 #16 NEW cov: 11784 ft: 13623 corp: 11/288b lim: 40 exec/s: 0 rss: 69Mb L: 35/39 MS: 1 InsertRepeatedBytes- 00:07:22.791 [2024-12-15 10:45:11.726854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.791 [2024-12-15 10:45:11.726878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.791 [2024-12-15 10:45:11.726951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.791 [2024-12-15 10:45:11.726964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.791 [2024-12-15 10:45:11.727021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.791 [2024-12-15 10:45:11.727034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.791 [2024-12-15 10:45:11.727090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.791 [2024-12-15 10:45:11.727104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.791 #17 NEW cov: 11784 ft: 13650 corp: 12/323b lim: 40 exec/s: 0 rss: 69Mb L: 35/39 MS: 1 ChangeByte- 00:07:22.791 [2024-12-15 10:45:11.766680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.791 [2024-12-15 10:45:11.766704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.791 [2024-12-15 10:45:11.766760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03000000 cdw11:000a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.791 [2024-12-15 10:45:11.766774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.791 #18 NEW cov: 11784 ft: 13747 corp: 13/344b lim: 40 exec/s: 0 rss: 69Mb L: 21/39 MS: 1 CrossOver- 00:07:23.050 [2024-12-15 10:45:11.807107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.050 [2024-12-15 10:45:11.807131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.050 [2024-12-15 10:45:11.807190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.050 [2024-12-15 10:45:11.807204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.050 [2024-12-15 10:45:11.807260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00100040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.051 [2024-12-15 10:45:11.807273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.051 [2024-12-15 10:45:11.807339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.051 [2024-12-15 10:45:11.807353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.051 #19 NEW cov: 11784 ft: 13794 corp: 14/379b lim: 40 exec/s: 0 rss: 69Mb L: 35/39 MS: 1 ChangeBit- 00:07:23.051 
[2024-12-15 10:45:11.847203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0013d556 cdw11:e1000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.051 [2024-12-15 10:45:11.847227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.051 [2024-12-15 10:45:11.847285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.051 [2024-12-15 10:45:11.847298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.051 [2024-12-15 10:45:11.847352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.051 [2024-12-15 10:45:11.847366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.051 [2024-12-15 10:45:11.847424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.051 [2024-12-15 10:45:11.847438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.051 #20 NEW cov: 11784 ft: 13865 corp: 15/414b lim: 40 exec/s: 0 rss: 69Mb L: 35/39 MS: 1 CMP- DE: "\023\325V\341"- 00:07:23.051 [2024-12-15 10:45:11.887332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.051 [2024-12-15 10:45:11.887356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.051 [2024-12-15 10:45:11.887413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.051 [2024-12-15 10:45:11.887431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.051 [2024-12-15 10:45:11.887486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:cdffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.051 [2024-12-15 10:45:11.887499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.051 [2024-12-15 10:45:11.887554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.051 [2024-12-15 10:45:11.887566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.051 #21 NEW cov: 11784 ft: 13882 corp: 16/447b lim: 40 exec/s: 0 rss: 69Mb L: 33/39 MS: 1 ChangeByte- 00:07:23.051 [2024-12-15 10:45:11.917541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.051 [2024-12-15 10:45:11.917566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.051 [2024-12-15 10:45:11.917623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.051 [2024-12-15 10:45:11.917636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.051 [2024-12-15 10:45:11.917693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:90909000 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.051 [2024-12-15 10:45:11.917707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.051 [2024-12-15 10:45:11.917763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:90909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.051 [2024-12-15 10:45:11.917775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.051 [2024-12-15 10:45:11.917830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:90900000 cdw11:0000003b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.051 [2024-12-15 10:45:11.917844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:23.051 #22 NEW cov: 11784 ft: 13986 corp: 17/487b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 CopyPart- 00:07:23.051 [2024-12-15 10:45:11.957262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffff0f cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.051 [2024-12-15 10:45:11.957286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.051 [2024-12-15 10:45:11.957342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03000000 cdw11:000a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.051 [2024-12-15 10:45:11.957355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.051 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:23.051 #23 NEW cov: 11807 ft: 14012 corp: 18/508b lim: 40 exec/s: 0 rss: 70Mb L: 21/40 MS: 1 CMP- DE: "\377\377\377\017"- 00:07:23.051 [2024-12-15 10:45:11.997623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.051 [2024-12-15 10:45:11.997647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.051 [2024-12-15 10:45:11.997704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.051 [2024-12-15 10:45:11.997718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.051 [2024-12-15 10:45:11.997773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) 
qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.051 [2024-12-15 10:45:11.997784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.051 [2024-12-15 10:45:11.997839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.051 [2024-12-15 10:45:11.997852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.051 #24 NEW cov: 11807 ft: 14034 corp: 19/543b lim: 40 exec/s: 0 rss: 70Mb L: 35/40 MS: 1 ChangeBit- 00:07:23.051 [2024-12-15 10:45:12.037498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.051 [2024-12-15 10:45:12.037522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.051 [2024-12-15 10:45:12.037579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03150000 cdw11:000a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.051 [2024-12-15 10:45:12.037595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.051 #25 NEW cov: 11807 ft: 14055 corp: 20/564b lim: 40 exec/s: 25 rss: 70Mb L: 21/40 MS: 1 ChangeBinInt- 00:07:23.310 [2024-12-15 10:45:12.077870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.310 [2024-12-15 10:45:12.077894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.311 [2024-12-15 10:45:12.077950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.311 [2024-12-15 10:45:12.077964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.311 [2024-12-15 10:45:12.078021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:90909090 cdw11:90909000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.311 [2024-12-15 10:45:12.078034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.311 [2024-12-15 10:45:12.078090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:0000f690 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.311 [2024-12-15 10:45:12.078102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.311 #26 NEW cov: 11807 ft: 14070 corp: 21/603b lim: 40 exec/s: 26 rss: 70Mb L: 39/40 MS: 1 CMP- DE: "\000\000\000\366"- 00:07:23.311 [2024-12-15 10:45:12.117754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.311 [2024-12-15 10:45:12.117779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.311 [2024-12-15 10:45:12.117837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00060000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.311 [2024-12-15 10:45:12.117852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.311 #27 NEW cov: 11807 ft: 14083 corp: 22/622b lim: 40 exec/s: 27 rss: 70Mb L: 19/40 MS: 1 ChangeBinInt- 00:07:23.311 [2024-12-15 10:45:12.157853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.311 [2024-12-15 10:45:12.157878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.311 [2024-12-15 10:45:12.157935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03000000 cdw11:000a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.311 [2024-12-15 10:45:12.157948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.311 #28 NEW cov: 11807 ft: 14171 corp: 23/643b lim: 40 exec/s: 28 rss: 70Mb L: 21/40 MS: 1 ChangeByte- 00:07:23.311 [2024-12-15 10:45:12.198366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.311 [2024-12-15 10:45:12.198391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.311 [2024-12-15 10:45:12.198448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.311 [2024-12-15 10:45:12.198462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.311 [2024-12-15 10:45:12.198521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:90909000 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.311 [2024-12-15 10:45:12.198534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.311 [2024-12-15 10:45:12.198590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:90909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.311 [2024-12-15 10:45:12.198603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.311 [2024-12-15 10:45:12.198657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:90900000 cdw11:0000003b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.311 [2024-12-15 10:45:12.198670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:23.311 #29 NEW cov: 11807 ft: 14181 corp: 24/683b lim: 40 exec/s: 29 rss: 70Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:23.311 [2024-12-15 10:45:12.238361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 
cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.311 [2024-12-15 10:45:12.238385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.311 [2024-12-15 10:45:12.238448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.311 [2024-12-15 10:45:12.238462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.311 [2024-12-15 10:45:12.238518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffff68ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.311 [2024-12-15 10:45:12.238531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.311 [2024-12-15 10:45:12.238585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.311 [2024-12-15 10:45:12.238597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.311 #30 NEW cov: 11807 ft: 14196 corp: 25/717b lim: 40 exec/s: 30 rss: 70Mb L: 34/40 MS: 1 InsertByte- 00:07:23.311 [2024-12-15 10:45:12.278225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffff0f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.311 [2024-12-15 10:45:12.278250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.311 [2024-12-15 10:45:12.278308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.311 [2024-12-15 10:45:12.278321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.311 #31 NEW cov: 11807 ft: 14220 corp: 26/738b lim: 40 exec/s: 31 rss: 70Mb L: 21/40 MS: 1 ShuffleBytes- 00:07:23.311 [2024-12-15 10:45:12.318326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.311 [2024-12-15 10:45:12.318350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.311 [2024-12-15 10:45:12.318408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03150000 cdw11:000affff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.311 [2024-12-15 10:45:12.318430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.570 #32 NEW cov: 11807 ft: 14227 corp: 27/759b lim: 40 exec/s: 32 rss: 70Mb L: 21/40 MS: 1 ChangeBinInt- 00:07:23.570 [2024-12-15 10:45:12.358444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffff0f cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.570 [2024-12-15 10:45:12.358468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.570 [2024-12-15 10:45:12.358524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.570 [2024-12-15 10:45:12.358537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.570 #33 NEW cov: 11807 ft: 14250 corp: 28/779b lim: 40 exec/s: 33 rss: 70Mb L: 20/40 MS: 1 PersAutoDict- DE: "\377\377\377\017"- 00:07:23.570 [2024-12-15 10:45:12.398957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.570 [2024-12-15 10:45:12.398981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.571 [2024-12-15 10:45:12.399041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.571 [2024-12-15 10:45:12.399054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.571 [2024-12-15 10:45:12.399111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:90909000 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.571 [2024-12-15 10:45:12.399124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.571 [2024-12-15 10:45:12.399177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:90909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.571 [2024-12-15 10:45:12.399190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.571 [2024-12-15 10:45:12.399246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:90900000 cdw11:0000003b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.571 [2024-12-15 10:45:12.399259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:23.571 #34 NEW cov: 11807 ft: 14269 corp: 29/819b lim: 40 exec/s: 34 rss: 70Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:23.571 [2024-12-15 10:45:12.438695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.571 [2024-12-15 10:45:12.438718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.571 [2024-12-15 10:45:12.438777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.571 [2024-12-15 10:45:12.438790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.571 #35 NEW cov: 11807 ft: 14274 corp: 30/840b lim: 40 exec/s: 35 rss: 70Mb L: 21/40 MS: 1 EraseBytes- 00:07:23.571 [2024-12-15 10:45:12.478815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 
cdw10:ff000000 cdw11:0f00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.571 [2024-12-15 10:45:12.478839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.571 [2024-12-15 10:45:12.478895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.571 [2024-12-15 10:45:12.478911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.571 #36 NEW cov: 11807 ft: 14287 corp: 31/860b lim: 40 exec/s: 36 rss: 70Mb L: 20/40 MS: 1 ShuffleBytes- 00:07:23.571 [2024-12-15 10:45:12.518917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.571 [2024-12-15 10:45:12.518940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.571 [2024-12-15 10:45:12.518997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03000000 cdw11:000a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.571 [2024-12-15 10:45:12.519010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.571 #37 NEW cov: 11807 ft: 14349 corp: 32/881b lim: 40 exec/s: 37 rss: 70Mb L: 21/40 MS: 1 CrossOver- 00:07:23.571 [2024-12-15 10:45:12.559199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.571 [2024-12-15 10:45:12.559223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.571 [2024-12-15 10:45:12.559279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00004000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.571 [2024-12-15 10:45:12.559293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.571 [2024-12-15 10:45:12.559347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.571 [2024-12-15 10:45:12.559359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.571 #38 NEW cov: 11807 ft: 14540 corp: 33/907b lim: 40 exec/s: 38 rss: 70Mb L: 26/40 MS: 1 EraseBytes- 00:07:23.830 [2024-12-15 10:45:12.599153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00250000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.830 [2024-12-15 10:45:12.599177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.830 [2024-12-15 10:45:12.599247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.830 [2024-12-15 10:45:12.599260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:07:23.830 #39 NEW cov: 11807 ft: 14556 corp: 34/927b lim: 40 exec/s: 39 rss: 70Mb L: 20/40 MS: 1 ChangeByte- 00:07:23.830 [2024-12-15 10:45:12.639283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffff0f cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.830 [2024-12-15 10:45:12.639307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.830 [2024-12-15 10:45:12.639366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.830 [2024-12-15 10:45:12.639379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.830 #40 NEW cov: 11807 ft: 14575 corp: 35/947b lim: 40 exec/s: 40 rss: 70Mb L: 20/40 MS: 1 CMP- DE: "\000\002\000\000"- 00:07:23.830 [2024-12-15 10:45:12.679775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.830 [2024-12-15 10:45:12.679802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.830 [2024-12-15 10:45:12.679859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ff000000 cdw11:00909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.831 [2024-12-15 10:45:12.679872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.831 [2024-12-15 10:45:12.679926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:90909000 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.831 [2024-12-15 10:45:12.679939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.831 [2024-12-15 10:45:12.679993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:90909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.831 [2024-12-15 10:45:12.680006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.831 [2024-12-15 10:45:12.680063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:90900000 cdw11:0000003b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.831 [2024-12-15 10:45:12.680076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:23.831 #41 NEW cov: 11807 ft: 14587 corp: 36/987b lim: 40 exec/s: 41 rss: 70Mb L: 40/40 MS: 1 CrossOver- 00:07:23.831 [2024-12-15 10:45:12.719783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.831 [2024-12-15 10:45:12.719807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.831 [2024-12-15 10:45:12.719864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.831 [2024-12-15 10:45:12.719877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.831 [2024-12-15 10:45:12.719932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000100 cdw11:00040000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.831 [2024-12-15 10:45:12.719945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.831 [2024-12-15 10:45:12.720001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.831 [2024-12-15 10:45:12.720013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.831 #42 NEW cov: 11807 ft: 14620 corp: 37/1022b lim: 40 exec/s: 42 rss: 70Mb L: 35/40 MS: 1 ChangeBinInt- 00:07:23.831 [2024-12-15 10:45:12.759778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.831 [2024-12-15 10:45:12.759802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.831 [2024-12-15 10:45:12.759859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03000000 cdw11:000a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.831 [2024-12-15 10:45:12.759872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.831 [2024-12-15 10:45:12.759928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:000000de cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.831 [2024-12-15 10:45:12.759944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.831 #43 NEW cov: 11807 ft: 14636 corp: 38/1048b lim: 40 exec/s: 43 rss: 70Mb L: 26/40 MS: 1 InsertRepeatedBytes- 00:07:23.831 [2024-12-15 10:45:12.800004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.831 [2024-12-15 10:45:12.800027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.831 [2024-12-15 10:45:12.800082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.831 [2024-12-15 10:45:12.800096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.831 [2024-12-15 10:45:12.800150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:90909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.831 [2024-12-15 10:45:12.800163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.831 [2024-12-15 10:45:12.800217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:003b0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.831 [2024-12-15 10:45:12.800230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.831 #44 NEW cov: 11807 ft: 14691 corp: 39/1083b lim: 40 exec/s: 44 rss: 70Mb L: 35/40 MS: 1 CopyPart- 00:07:23.831 [2024-12-15 10:45:12.840130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.831 [2024-12-15 10:45:12.840154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.831 [2024-12-15 10:45:12.840211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.831 [2024-12-15 10:45:12.840224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.831 [2024-12-15 10:45:12.840279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00100040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.831 [2024-12-15 10:45:12.840292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.831 [2024-12-15 10:45:12.840347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.831 [2024-12-15 10:45:12.840360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.091 #45 NEW cov: 11807 ft: 14713 corp: 40/1118b lim: 40 exec/s: 45 rss: 70Mb L: 35/40 MS: 1 CrossOver- 00:07:24.091 [2024-12-15 10:45:12.880216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000002b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.091 [2024-12-15 10:45:12.880240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.091 [2024-12-15 10:45:12.880297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:2b2b2b00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.091 [2024-12-15 10:45:12.880311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.091 [2024-12-15 10:45:12.880366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.091 [2024-12-15 10:45:12.880382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.091 [2024-12-15 10:45:12.880438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:90909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.091 [2024-12-15 10:45:12.880451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.091 #46 NEW cov: 11807 ft: 
14721 corp: 41/1157b lim: 40 exec/s: 46 rss: 70Mb L: 39/40 MS: 1 InsertRepeatedBytes- 00:07:24.091 [2024-12-15 10:45:12.910057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.091 [2024-12-15 10:45:12.910080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.091 [2024-12-15 10:45:12.910137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03150000 cdw11:00320a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.091 [2024-12-15 10:45:12.910150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.091 #47 NEW cov: 11807 ft: 14764 corp: 42/1179b lim: 40 exec/s: 47 rss: 70Mb L: 22/40 MS: 1 InsertByte- 00:07:24.091 [2024-12-15 10:45:12.950403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.091 [2024-12-15 10:45:12.950429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.091 [2024-12-15 10:45:12.950485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00020000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.091 [2024-12-15 10:45:12.950498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.091 [2024-12-15 10:45:12.950553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.091 [2024-12-15 10:45:12.950566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.091 [2024-12-15 10:45:12.950619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.091 [2024-12-15 10:45:12.950632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.091 #48 NEW cov: 11807 ft: 14765 corp: 43/1214b lim: 40 exec/s: 48 rss: 70Mb L: 35/40 MS: 1 ChangeBit- 00:07:24.091 [2024-12-15 10:45:12.990660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.091 [2024-12-15 10:45:12.990684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.091 [2024-12-15 10:45:12.990742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:2b2b2b2b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.091 [2024-12-15 10:45:12.990755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.091 [2024-12-15 10:45:12.990810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00009090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.091 
[2024-12-15 10:45:12.990823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.091 [2024-12-15 10:45:12.990881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:90909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.091 [2024-12-15 10:45:12.990893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.091 [2024-12-15 10:45:12.990948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:90900000 cdw11:0000003b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.091 [2024-12-15 10:45:12.990962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:24.091 #49 NEW cov: 11807 ft: 14788 corp: 44/1254b lim: 40 exec/s: 49 rss: 70Mb L: 40/40 MS: 1 CrossOver- 00:07:24.091 [2024-12-15 10:45:13.030646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.091 [2024-12-15 10:45:13.030671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.091 [2024-12-15 10:45:13.030729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00003f00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.091 [2024-12-15 10:45:13.030743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.091 [2024-12-15 10:45:13.030798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.091 [2024-12-15 10:45:13.030811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.091 [2024-12-15 10:45:13.030854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.091 [2024-12-15 10:45:13.030867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.091 #50 NEW cov: 11807 ft: 14802 corp: 45/1290b lim: 40 exec/s: 25 rss: 70Mb L: 36/40 MS: 1 InsertByte- 00:07:24.091 #50 DONE cov: 11807 ft: 14802 corp: 45/1290b lim: 40 exec/s: 25 rss: 70Mb 00:07:24.091 ###### Recommended dictionary. ###### 00:07:24.091 "\023\325V\341" # Uses: 0 00:07:24.091 "\377\377\377\017" # Uses: 1 00:07:24.091 "\000\000\000\366" # Uses: 0 00:07:24.091 "\000\002\000\000" # Uses: 0 00:07:24.091 ###### End of recommended dictionary. 
###### 00:07:24.091 Done 50 runs in 2 second(s) 00:07:24.351 10:45:13 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_10.conf 00:07:24.351 10:45:13 -- ../common.sh@72 -- # (( i++ )) 00:07:24.351 10:45:13 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:24.351 10:45:13 -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:07:24.351 10:45:13 -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:07:24.351 10:45:13 -- nvmf/run.sh@24 -- # local timen=1 00:07:24.351 10:45:13 -- nvmf/run.sh@25 -- # local core=0x1 00:07:24.351 10:45:13 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:24.351 10:45:13 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:07:24.351 10:45:13 -- nvmf/run.sh@29 -- # printf %02d 11 00:07:24.351 10:45:13 -- nvmf/run.sh@29 -- # port=4411 00:07:24.351 10:45:13 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:24.351 10:45:13 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:07:24.351 10:45:13 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:24.351 10:45:13 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 -r /var/tmp/spdk11.sock 00:07:24.351 [2024-12-15 10:45:13.215921] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:24.351 [2024-12-15 10:45:13.215988] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1311529 ] 00:07:24.351 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.610 [2024-12-15 10:45:13.467700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.610 [2024-12-15 10:45:13.545624] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:24.610 [2024-12-15 10:45:13.545749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.610 [2024-12-15 10:45:13.603571] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.610 [2024-12-15 10:45:13.619854] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:07:24.869 INFO: Running with entropic power schedule (0xFF, 100). 00:07:24.869 INFO: Seed: 2717098458 00:07:24.869 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:24.869 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:24.869 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:24.869 INFO: A corpus is not provided, starting from an empty corpus 00:07:24.869 #2 INITED exec/s: 0 rss: 60Mb 00:07:24.869 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:24.869 This may also happen if the target rejected all inputs we tried so far 00:07:24.869 [2024-12-15 10:45:13.665100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.869 [2024-12-15 10:45:13.665129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.128 NEW_FUNC[1/671]: 0x4493f8 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:07:25.128 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:25.128 #5 NEW cov: 11592 ft: 11593 corp: 2/11b lim: 40 exec/s: 0 rss: 68Mb L: 10/10 MS: 3 CopyPart-CrossOver-InsertRepeatedBytes- 00:07:25.128 [2024-12-15 10:45:13.955733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0a0a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.128 [2024-12-15 10:45:13.955766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.128 #6 NEW cov: 11705 ft: 12091 corp: 3/22b lim: 40 exec/s: 0 rss: 69Mb L: 11/11 MS: 1 CrossOver- 00:07:25.128 [2024-12-15 10:45:14.006237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.128 [2024-12-15 10:45:14.006262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.128 [2024-12-15 10:45:14.006317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.128 [2024-12-15 10:45:14.006331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.128 [2024-12-15 10:45:14.006385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.128 [2024-12-15 10:45:14.006398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.128 [2024-12-15 10:45:14.006459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.129 [2024-12-15 10:45:14.006472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.129 #11 NEW cov: 11711 ft: 13055 corp: 4/58b lim: 40 exec/s: 0 rss: 69Mb L: 36/36 MS: 5 ShuffleBytes-ChangeByte-ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:07:25.129 [2024-12-15 10:45:14.045885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0a0a2c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.129 [2024-12-15 10:45:14.045909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.129 #12 NEW cov: 11796 ft: 13308 corp: 5/69b lim: 40 exec/s: 0 rss: 69Mb L: 11/36 MS: 1 ChangeByte- 00:07:25.129 
[2024-12-15 10:45:14.086502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.129 [2024-12-15 10:45:14.086526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.129 [2024-12-15 10:45:14.086582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.129 [2024-12-15 10:45:14.086595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.129 [2024-12-15 10:45:14.086651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.129 [2024-12-15 10:45:14.086664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.129 [2024-12-15 10:45:14.086719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:f9000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.129 [2024-12-15 10:45:14.086732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.129 #13 NEW cov: 11796 ft: 13398 corp: 6/105b lim: 40 exec/s: 0 rss: 69Mb L: 36/36 MS: 1 ChangeBinInt- 00:07:25.129 [2024-12-15 10:45:14.126426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.129 [2024-12-15 10:45:14.126450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.129 [2024-12-15 10:45:14.126505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.129 [2024-12-15 10:45:14.126519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.129 [2024-12-15 10:45:14.126573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.129 [2024-12-15 10:45:14.126586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.388 #14 NEW cov: 11796 ft: 13721 corp: 7/129b lim: 40 exec/s: 0 rss: 69Mb L: 24/36 MS: 1 EraseBytes- 00:07:25.388 [2024-12-15 10:45:14.166260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0a212c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.388 [2024-12-15 10:45:14.166283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.388 #15 NEW cov: 11796 ft: 13786 corp: 8/140b lim: 40 exec/s: 0 rss: 69Mb L: 11/36 MS: 1 ChangeByte- 00:07:25.388 [2024-12-15 10:45:14.206808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.388 [2024-12-15 10:45:14.206833] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.388 [2024-12-15 10:45:14.206892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.388 [2024-12-15 10:45:14.206906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.388 [2024-12-15 10:45:14.206960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.388 [2024-12-15 10:45:14.206974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.388 [2024-12-15 10:45:14.207029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.388 [2024-12-15 10:45:14.207043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.388 #16 NEW cov: 11796 ft: 13793 corp: 9/176b lim: 40 exec/s: 0 rss: 69Mb L: 36/36 MS: 1 ShuffleBytes- 00:07:25.388 [2024-12-15 10:45:14.246924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.388 [2024-12-15 10:45:14.246948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.388 [2024-12-15 10:45:14.247005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.388 [2024-12-15 10:45:14.247018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.388 [2024-12-15 10:45:14.247073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.388 [2024-12-15 10:45:14.247085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.388 [2024-12-15 10:45:14.247139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.388 [2024-12-15 10:45:14.247151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.388 #17 NEW cov: 11796 ft: 13811 corp: 10/212b lim: 40 exec/s: 0 rss: 69Mb L: 36/36 MS: 1 CrossOver- 00:07:25.388 [2024-12-15 10:45:14.287018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.388 [2024-12-15 10:45:14.287042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.388 [2024-12-15 10:45:14.287095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:11000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.388 [2024-12-15 10:45:14.287108] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.388 [2024-12-15 10:45:14.287161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.388 [2024-12-15 10:45:14.287173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.388 [2024-12-15 10:45:14.287227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:f9000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.389 [2024-12-15 10:45:14.287239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.389 #18 NEW cov: 11796 ft: 13856 corp: 11/248b lim: 40 exec/s: 0 rss: 69Mb L: 36/36 MS: 1 CMP- DE: "\021\000"- 00:07:25.389 [2024-12-15 10:45:14.326692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00dee800 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.389 [2024-12-15 10:45:14.326716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.389 #19 NEW cov: 11796 ft: 13931 corp: 12/260b lim: 40 exec/s: 0 rss: 69Mb L: 12/36 MS: 1 CrossOver- 00:07:25.389 [2024-12-15 10:45:14.367215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.389 [2024-12-15 10:45:14.367239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.389 [2024-12-15 10:45:14.367293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.389 [2024-12-15 10:45:14.367306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.389 [2024-12-15 10:45:14.367360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.389 [2024-12-15 10:45:14.367373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.389 [2024-12-15 10:45:14.367430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.389 [2024-12-15 10:45:14.367442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.389 #20 NEW cov: 11796 ft: 13956 corp: 13/296b lim: 40 exec/s: 0 rss: 69Mb L: 36/36 MS: 1 ChangeBit- 00:07:25.648 [2024-12-15 10:45:14.407502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.648 [2024-12-15 10:45:14.407525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.648 [2024-12-15 10:45:14.407580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) 
qid:0 cid:5 nsid:0 cdw10:00004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.648 [2024-12-15 10:45:14.407593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.648 [2024-12-15 10:45:14.407648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.648 [2024-12-15 10:45:14.407660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.648 [2024-12-15 10:45:14.407714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.648 [2024-12-15 10:45:14.407726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.648 [2024-12-15 10:45:14.407778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.648 [2024-12-15 10:45:14.407791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.648 #21 NEW cov: 11796 ft: 14052 corp: 14/336b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 CopyPart- 00:07:25.648 [2024-12-15 10:45:14.457672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00000040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.648 [2024-12-15 10:45:14.457697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.648 [2024-12-15 10:45:14.457755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.648 [2024-12-15 10:45:14.457770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.648 [2024-12-15 10:45:14.457823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.648 [2024-12-15 10:45:14.457836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.648 [2024-12-15 10:45:14.457889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.648 [2024-12-15 10:45:14.457903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.648 [2024-12-15 10:45:14.457954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.648 [2024-12-15 10:45:14.457967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.649 #22 NEW cov: 11796 ft: 14059 corp: 15/376b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 CopyPart- 00:07:25.649 [2024-12-15 10:45:14.507252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND 
(81) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00dee800 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.649 [2024-12-15 10:45:14.507277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.649 #23 NEW cov: 11796 ft: 14114 corp: 16/389b lim: 40 exec/s: 0 rss: 70Mb L: 13/40 MS: 1 InsertByte- 00:07:25.649 [2024-12-15 10:45:14.547787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.649 [2024-12-15 10:45:14.547813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.649 [2024-12-15 10:45:14.547883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.649 [2024-12-15 10:45:14.547897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.649 [2024-12-15 10:45:14.547954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.649 [2024-12-15 10:45:14.547967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.649 [2024-12-15 10:45:14.548020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:f9000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.649 [2024-12-15 10:45:14.548033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.649 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:25.649 #24 NEW cov: 11819 ft: 14157 corp: 17/425b lim: 40 exec/s: 0 rss: 70Mb L: 36/40 MS: 1 PersAutoDict- DE: "\021\000"- 00:07:25.649 [2024-12-15 10:45:14.587444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.649 [2024-12-15 10:45:14.587469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.649 #25 NEW cov: 11819 ft: 14181 corp: 18/435b lim: 40 exec/s: 0 rss: 70Mb L: 10/40 MS: 1 PersAutoDict- DE: "\021\000"- 00:07:25.649 [2024-12-15 10:45:14.628015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.649 [2024-12-15 10:45:14.628043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.649 [2024-12-15 10:45:14.628097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.649 [2024-12-15 10:45:14.628110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.649 [2024-12-15 10:45:14.628163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:25.649 [2024-12-15 10:45:14.628176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.649 [2024-12-15 10:45:14.628229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:fd000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.649 [2024-12-15 10:45:14.628242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.649 #26 NEW cov: 11819 ft: 14246 corp: 19/471b lim: 40 exec/s: 26 rss: 70Mb L: 36/40 MS: 1 ChangeBit- 00:07:25.908 [2024-12-15 10:45:14.667976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.908 [2024-12-15 10:45:14.668001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.908 [2024-12-15 10:45:14.668054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:2a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.908 [2024-12-15 10:45:14.668067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.908 [2024-12-15 10:45:14.668116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.908 [2024-12-15 10:45:14.668129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.908 #27 NEW cov: 11819 ft: 14305 corp: 20/496b lim: 40 exec/s: 27 rss: 70Mb L: 25/40 MS: 1 InsertByte- 00:07:25.908 [2024-12-15 10:45:14.707786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0a2c00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.908 [2024-12-15 10:45:14.707812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.908 #28 NEW cov: 11819 ft: 14323 corp: 21/506b lim: 40 exec/s: 28 rss: 70Mb L: 10/40 MS: 1 EraseBytes- 00:07:25.908 [2024-12-15 10:45:14.748481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.908 [2024-12-15 10:45:14.748506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.909 [2024-12-15 10:45:14.748562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.909 [2024-12-15 10:45:14.748576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.909 [2024-12-15 10:45:14.748631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:dee80000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.909 [2024-12-15 10:45:14.748643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.909 [2024-12-15 10:45:14.748698] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.909 [2024-12-15 10:45:14.748714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.909 [2024-12-15 10:45:14.748768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:fd000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.909 [2024-12-15 10:45:14.748780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.909 #29 NEW cov: 11819 ft: 14346 corp: 22/546b lim: 40 exec/s: 29 rss: 70Mb L: 40/40 MS: 1 CopyPart- 00:07:25.909 [2024-12-15 10:45:14.788506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.909 [2024-12-15 10:45:14.788531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.909 [2024-12-15 10:45:14.788583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fffffff6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.909 [2024-12-15 10:45:14.788596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.909 [2024-12-15 10:45:14.788642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.909 [2024-12-15 10:45:14.788655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.909 [2024-12-15 10:45:14.788708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:fd000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.909 [2024-12-15 10:45:14.788720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.909 #30 NEW cov: 11819 ft: 14388 corp: 23/582b lim: 40 exec/s: 30 rss: 70Mb L: 36/40 MS: 1 ChangeBinInt- 00:07:25.909 [2024-12-15 10:45:14.828732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.909 [2024-12-15 10:45:14.828756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.909 [2024-12-15 10:45:14.828813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.909 [2024-12-15 10:45:14.828826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.909 [2024-12-15 10:45:14.828882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:dee80000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.909 [2024-12-15 10:45:14.828895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.909 [2024-12-15 10:45:14.828949] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.909 [2024-12-15 10:45:14.828962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.909 [2024-12-15 10:45:14.829015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:fd000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.909 [2024-12-15 10:45:14.829028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.909 #31 NEW cov: 11819 ft: 14405 corp: 24/622b lim: 40 exec/s: 31 rss: 70Mb L: 40/40 MS: 1 ChangeBit- 00:07:25.909 [2024-12-15 10:45:14.868681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.909 [2024-12-15 10:45:14.868708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.909 [2024-12-15 10:45:14.868763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.909 [2024-12-15 10:45:14.868777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.909 [2024-12-15 10:45:14.868832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.909 [2024-12-15 10:45:14.868844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.909 [2024-12-15 10:45:14.868894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:f9000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.909 [2024-12-15 10:45:14.868907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.909 #32 NEW cov: 11819 ft: 14459 corp: 25/658b lim: 40 exec/s: 32 rss: 70Mb L: 36/40 MS: 1 ShuffleBytes- 00:07:25.909 [2024-12-15 10:45:14.908755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:000000cd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.909 [2024-12-15 10:45:14.908780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.909 [2024-12-15 10:45:14.908837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.909 [2024-12-15 10:45:14.908850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.909 [2024-12-15 10:45:14.908904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.909 [2024-12-15 10:45:14.908916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.909 [2024-12-15 
10:45:14.908970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.909 [2024-12-15 10:45:14.908983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.168 #33 NEW cov: 11819 ft: 14478 corp: 26/695b lim: 40 exec/s: 33 rss: 70Mb L: 37/40 MS: 1 InsertByte- 00:07:26.168 [2024-12-15 10:45:14.958949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.168 [2024-12-15 10:45:14.958973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.168 [2024-12-15 10:45:14.959028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.168 [2024-12-15 10:45:14.959042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.168 [2024-12-15 10:45:14.959095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.168 [2024-12-15 10:45:14.959108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.168 [2024-12-15 10:45:14.959163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.168 [2024-12-15 10:45:14.959179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.168 #39 NEW cov: 11819 ft: 14549 corp: 27/731b lim: 40 exec/s: 39 rss: 70Mb L: 36/40 MS: 1 PersAutoDict- DE: "\021\000"- 00:07:26.168 [2024-12-15 10:45:14.998593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0a0400 cdw11:00dee800 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.168 [2024-12-15 10:45:14.998617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.169 #40 NEW cov: 11819 ft: 14623 corp: 28/743b lim: 40 exec/s: 40 rss: 70Mb L: 12/40 MS: 1 ChangeBit- 00:07:26.169 [2024-12-15 10:45:15.039178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.169 [2024-12-15 10:45:15.039202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.169 [2024-12-15 10:45:15.039257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:dee80000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.169 [2024-12-15 10:45:15.039270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.169 [2024-12-15 10:45:15.039326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000040 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.169 [2024-12-15 10:45:15.039339] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.169 [2024-12-15 10:45:15.039393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.169 [2024-12-15 10:45:15.039406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.169 #41 NEW cov: 11819 ft: 14708 corp: 29/779b lim: 40 exec/s: 41 rss: 70Mb L: 36/40 MS: 1 CrossOver- 00:07:26.169 [2024-12-15 10:45:15.078971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.169 [2024-12-15 10:45:15.078995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.169 [2024-12-15 10:45:15.079066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.169 [2024-12-15 10:45:15.079080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.169 #42 NEW cov: 11819 ft: 14902 corp: 30/797b lim: 40 exec/s: 42 rss: 70Mb L: 18/40 MS: 1 EraseBytes- 00:07:26.169 [2024-12-15 10:45:15.119051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.169 [2024-12-15 10:45:15.119076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.169 [2024-12-15 10:45:15.119132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.169 [2024-12-15 10:45:15.119145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.169 #43 NEW cov: 11819 ft: 14914 corp: 31/815b lim: 40 exec/s: 43 rss: 70Mb L: 18/40 MS: 1 CopyPart- 00:07:26.169 [2024-12-15 10:45:15.159494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.169 [2024-12-15 10:45:15.159518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.169 [2024-12-15 10:45:15.159578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:dee80000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.169 [2024-12-15 10:45:15.159592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.169 [2024-12-15 10:45:15.159645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00200040 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.169 [2024-12-15 10:45:15.159657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.169 [2024-12-15 10:45:15.159711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 
cid:7 nsid:0 cdw10:00000000 cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.169 [2024-12-15 10:45:15.159723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.427 #44 NEW cov: 11819 ft: 14925 corp: 32/851b lim: 40 exec/s: 44 rss: 70Mb L: 36/40 MS: 1 ChangeBit- 00:07:26.427 [2024-12-15 10:45:15.199618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:defae800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.427 [2024-12-15 10:45:15.199642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.427 [2024-12-15 10:45:15.199698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00110000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.427 [2024-12-15 10:45:15.199711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.427 [2024-12-15 10:45:15.199764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.427 [2024-12-15 10:45:15.199777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.427 [2024-12-15 10:45:15.199830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:fff90000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.427 [2024-12-15 10:45:15.199842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.427 #45 NEW cov: 11819 ft: 14947 corp: 33/888b lim: 40 exec/s: 45 rss: 70Mb L: 37/40 MS: 1 InsertByte- 00:07:26.427 [2024-12-15 10:45:15.239579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.427 [2024-12-15 10:45:15.239604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.427 [2024-12-15 10:45:15.239662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:dee80000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.427 [2024-12-15 10:45:15.239676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.427 [2024-12-15 10:45:15.239727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00001100 cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.427 [2024-12-15 10:45:15.239740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.427 #46 NEW cov: 11819 ft: 14960 corp: 34/915b lim: 40 exec/s: 46 rss: 70Mb L: 27/40 MS: 1 CrossOver- 00:07:26.427 [2024-12-15 10:45:15.279565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:32000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.427 [2024-12-15 10:45:15.279589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:26.427 [2024-12-15 10:45:15.279649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.427 [2024-12-15 10:45:15.279662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.427 #47 NEW cov: 11819 ft: 14970 corp: 35/934b lim: 40 exec/s: 47 rss: 70Mb L: 19/40 MS: 1 InsertByte- 00:07:26.427 [2024-12-15 10:45:15.319694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:32000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.427 [2024-12-15 10:45:15.319717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.427 [2024-12-15 10:45:15.319788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.427 [2024-12-15 10:45:15.319802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.427 #48 NEW cov: 11819 ft: 15031 corp: 36/953b lim: 40 exec/s: 48 rss: 70Mb L: 19/40 MS: 1 ChangeBinInt- 00:07:26.427 [2024-12-15 10:45:15.360116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.427 [2024-12-15 10:45:15.360140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.427 [2024-12-15 10:45:15.360194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.427 [2024-12-15 10:45:15.360207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.427 [2024-12-15 10:45:15.360263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000f000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.427 [2024-12-15 10:45:15.360276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.427 [2024-12-15 10:45:15.360332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.427 [2024-12-15 10:45:15.360345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.427 #49 NEW cov: 11819 ft: 15032 corp: 37/989b lim: 40 exec/s: 49 rss: 70Mb L: 36/40 MS: 1 ChangeByte- 00:07:26.427 [2024-12-15 10:45:15.400299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.427 [2024-12-15 10:45:15.400323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.427 [2024-12-15 10:45:15.400381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.427 [2024-12-15 10:45:15.400394] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.427 [2024-12-15 10:45:15.400453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:de00e800 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.427 [2024-12-15 10:45:15.400466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.427 [2024-12-15 10:45:15.400521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.427 [2024-12-15 10:45:15.400534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.427 [2024-12-15 10:45:15.400590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:fd000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.427 [2024-12-15 10:45:15.400603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:26.427 #50 NEW cov: 11819 ft: 15041 corp: 38/1029b lim: 40 exec/s: 50 rss: 70Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:26.427 [2024-12-15 10:45:15.439891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0a212c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.427 [2024-12-15 10:45:15.439915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.686 #51 NEW cov: 11819 ft: 15050 corp: 39/1040b lim: 40 exec/s: 51 rss: 70Mb L: 11/40 MS: 1 ChangeBit- 00:07:26.686 [2024-12-15 10:45:15.480404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.686 [2024-12-15 10:45:15.480432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.686 [2024-12-15 10:45:15.480489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.686 [2024-12-15 10:45:15.480502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.686 [2024-12-15 10:45:15.480558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.686 [2024-12-15 10:45:15.480571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.686 [2024-12-15 10:45:15.480627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.686 [2024-12-15 10:45:15.480640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.686 #52 NEW cov: 11819 ft: 15062 corp: 40/1077b lim: 40 exec/s: 52 rss: 70Mb L: 37/40 MS: 1 InsertByte- 00:07:26.686 [2024-12-15 10:45:15.520549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) 
qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.686 [2024-12-15 10:45:15.520572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.686 [2024-12-15 10:45:15.520644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:de8d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.686 [2024-12-15 10:45:15.520657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.686 [2024-12-15 10:45:15.520711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00200040 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.686 [2024-12-15 10:45:15.520725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.686 [2024-12-15 10:45:15.520780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.686 [2024-12-15 10:45:15.520793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.686 #53 NEW cov: 11819 ft: 15067 corp: 41/1113b lim: 40 exec/s: 53 rss: 70Mb L: 36/40 MS: 1 ChangeByte- 00:07:26.686 [2024-12-15 10:45:15.560656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee60000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.686 [2024-12-15 10:45:15.560683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.686 [2024-12-15 10:45:15.560740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.686 [2024-12-15 10:45:15.560753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.686 [2024-12-15 10:45:15.560809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000f000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.686 [2024-12-15 10:45:15.560822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.686 [2024-12-15 10:45:15.560879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.686 [2024-12-15 10:45:15.560892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.686 #54 NEW cov: 11819 ft: 15080 corp: 42/1149b lim: 40 exec/s: 54 rss: 70Mb L: 36/40 MS: 1 ChangeBinInt- 00:07:26.686 [2024-12-15 10:45:15.600770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:000a0a00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.686 [2024-12-15 10:45:15.600795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.686 [2024-12-15 10:45:15.600843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.686 [2024-12-15 10:45:15.600856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.686 [2024-12-15 10:45:15.600914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.686 [2024-12-15 10:45:15.600928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.687 [2024-12-15 10:45:15.600983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:0000dee8 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.687 [2024-12-15 10:45:15.600996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.687 #55 NEW cov: 11819 ft: 15082 corp: 43/1183b lim: 40 exec/s: 55 rss: 71Mb L: 34/40 MS: 1 CrossOver- 00:07:26.687 [2024-12-15 10:45:15.641044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dee80000 cdw11:00110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.687 [2024-12-15 10:45:15.641068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.687 [2024-12-15 10:45:15.641126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.687 [2024-12-15 10:45:15.641139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.687 [2024-12-15 10:45:15.641195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:dee80000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.687 [2024-12-15 10:45:15.641208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.687 [2024-12-15 10:45:15.641262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00ffffff cdw11:ff0000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.687 [2024-12-15 10:45:15.641275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.687 [2024-12-15 10:45:15.641333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:fffff600 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.687 [2024-12-15 10:45:15.641346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:26.687 #56 NEW cov: 11819 ft: 15101 corp: 44/1223b lim: 40 exec/s: 28 rss: 71Mb L: 40/40 MS: 1 CrossOver- 00:07:26.687 #56 DONE cov: 11819 ft: 15101 corp: 44/1223b lim: 40 exec/s: 28 rss: 71Mb 00:07:26.687 ###### Recommended dictionary. ###### 00:07:26.687 "\021\000" # Uses: 3 00:07:26.687 ###### End of recommended dictionary. 
###### 00:07:26.687 Done 56 runs in 2 second(s) 00:07:26.947 10:45:15 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_11.conf 00:07:26.947 10:45:15 -- ../common.sh@72 -- # (( i++ )) 00:07:26.947 10:45:15 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:26.947 10:45:15 -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:07:26.947 10:45:15 -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:07:26.947 10:45:15 -- nvmf/run.sh@24 -- # local timen=1 00:07:26.947 10:45:15 -- nvmf/run.sh@25 -- # local core=0x1 00:07:26.947 10:45:15 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:26.947 10:45:15 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:07:26.947 10:45:15 -- nvmf/run.sh@29 -- # printf %02d 12 00:07:26.947 10:45:15 -- nvmf/run.sh@29 -- # port=4412 00:07:26.947 10:45:15 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:26.947 10:45:15 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:07:26.947 10:45:15 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:26.947 10:45:15 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 -r /var/tmp/spdk12.sock 00:07:26.947 [2024-12-15 10:45:15.826350] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:26.947 [2024-12-15 10:45:15.826426] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1311976 ] 00:07:26.947 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.206 [2024-12-15 10:45:16.092805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.206 [2024-12-15 10:45:16.173692] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:27.206 [2024-12-15 10:45:16.173819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.466 [2024-12-15 10:45:16.232161] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.466 [2024-12-15 10:45:16.248484] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:07:27.466 INFO: Running with entropic power schedule (0xFF, 100). 00:07:27.466 INFO: Seed: 1051134969 00:07:27.466 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:27.466 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:27.466 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:27.466 INFO: A corpus is not provided, starting from an empty corpus 00:07:27.466 #2 INITED exec/s: 0 rss: 61Mb 00:07:27.466 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:27.466 This may also happen if the target rejected all inputs we tried so far 00:07:27.466 [2024-12-15 10:45:16.304033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.466 [2024-12-15 10:45:16.304061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.466 [2024-12-15 10:45:16.304138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.466 [2024-12-15 10:45:16.304152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.466 [2024-12-15 10:45:16.304207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.466 [2024-12-15 10:45:16.304221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.725 NEW_FUNC[1/671]: 0x44b168 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:07:27.725 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:27.725 #18 NEW cov: 11590 ft: 11591 corp: 2/26b lim: 40 exec/s: 0 rss: 68Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:07:27.725 [2024-12-15 10:45:16.604744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71712371 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.725 [2024-12-15 10:45:16.604775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.726 [2024-12-15 10:45:16.604832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.726 [2024-12-15 10:45:16.604845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.726 [2024-12-15 10:45:16.604899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.726 [2024-12-15 10:45:16.604913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.726 #19 NEW cov: 11703 ft: 12070 corp: 3/52b lim: 40 exec/s: 0 rss: 68Mb L: 26/26 MS: 1 InsertByte- 00:07:27.726 [2024-12-15 10:45:16.655060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71237171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.726 [2024-12-15 10:45:16.655086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.726 [2024-12-15 10:45:16.655143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.726 [2024-12-15 10:45:16.655157] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.726 [2024-12-15 10:45:16.655212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.726 [2024-12-15 10:45:16.655226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.726 [2024-12-15 10:45:16.655293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.726 [2024-12-15 10:45:16.655322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.726 #20 NEW cov: 11709 ft: 12521 corp: 4/86b lim: 40 exec/s: 0 rss: 68Mb L: 34/34 MS: 1 CrossOver- 00:07:27.726 [2024-12-15 10:45:16.694990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.726 [2024-12-15 10:45:16.695017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.726 [2024-12-15 10:45:16.695076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.726 [2024-12-15 10:45:16.695089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.726 [2024-12-15 10:45:16.695145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.726 [2024-12-15 10:45:16.695158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.726 #26 NEW cov: 11794 ft: 12732 corp: 5/111b lim: 40 exec/s: 0 rss: 68Mb L: 25/34 MS: 1 ShuffleBytes- 00:07:27.726 [2024-12-15 10:45:16.734783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:71717171 cdw11:23717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.726 [2024-12-15 10:45:16.734809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.985 #27 NEW cov: 11794 ft: 13777 corp: 6/123b lim: 40 exec/s: 0 rss: 68Mb L: 12/34 MS: 1 CrossOver- 00:07:27.985 [2024-12-15 10:45:16.775178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.985 [2024-12-15 10:45:16.775202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.985 [2024-12-15 10:45:16.775249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.985 [2024-12-15 10:45:16.775263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.985 [2024-12-15 10:45:16.775317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.985 [2024-12-15 10:45:16.775330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.985 #28 NEW cov: 11794 ft: 13944 corp: 7/149b lim: 40 exec/s: 0 rss: 68Mb L: 26/34 MS: 1 InsertByte- 00:07:27.985 [2024-12-15 10:45:16.814982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:71717971 cdw11:23717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.985 [2024-12-15 10:45:16.815007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.985 #29 NEW cov: 11794 ft: 14097 corp: 8/161b lim: 40 exec/s: 0 rss: 68Mb L: 12/34 MS: 1 ChangeBit- 00:07:27.985 [2024-12-15 10:45:16.855607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71237171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.985 [2024-12-15 10:45:16.855633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.985 [2024-12-15 10:45:16.855684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.985 [2024-12-15 10:45:16.855699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.985 [2024-12-15 10:45:16.855754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.985 [2024-12-15 10:45:16.855768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.985 [2024-12-15 10:45:16.855825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.985 [2024-12-15 10:45:16.855841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.985 #30 NEW cov: 11794 ft: 14130 corp: 9/200b lim: 40 exec/s: 0 rss: 68Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:07:27.985 [2024-12-15 10:45:16.895561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.985 [2024-12-15 10:45:16.895585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.985 [2024-12-15 10:45:16.895641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:7171719c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.985 [2024-12-15 10:45:16.895655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.985 [2024-12-15 10:45:16.895709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.985 [2024-12-15 10:45:16.895723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.985 #31 NEW cov: 11794 ft: 
14152 corp: 10/226b lim: 40 exec/s: 0 rss: 69Mb L: 26/39 MS: 1 InsertByte- 00:07:27.985 [2024-12-15 10:45:16.935664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:3d0a7171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.985 [2024-12-15 10:45:16.935689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.985 [2024-12-15 10:45:16.935746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.985 [2024-12-15 10:45:16.935760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.985 [2024-12-15 10:45:16.935813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.985 [2024-12-15 10:45:16.935827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.985 #32 NEW cov: 11794 ft: 14249 corp: 11/252b lim: 40 exec/s: 0 rss: 69Mb L: 26/39 MS: 1 InsertByte- 00:07:27.985 [2024-12-15 10:45:16.975937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71237171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.985 [2024-12-15 10:45:16.975962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.985 [2024-12-15 10:45:16.976020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.985 [2024-12-15 10:45:16.976034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.985 [2024-12-15 10:45:16.976089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.985 [2024-12-15 10:45:16.976102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.985 [2024-12-15 10:45:16.976156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:71717171 cdw11:71607171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.985 [2024-12-15 10:45:16.976170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.985 #33 NEW cov: 11794 ft: 14279 corp: 12/286b lim: 40 exec/s: 0 rss: 69Mb L: 34/39 MS: 1 ChangeByte- 00:07:28.245 [2024-12-15 10:45:17.016034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71237151 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.245 [2024-12-15 10:45:17.016061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.245 [2024-12-15 10:45:17.016118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.245 [2024-12-15 10:45:17.016132] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.245 [2024-12-15 10:45:17.016185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.245 [2024-12-15 10:45:17.016198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.245 [2024-12-15 10:45:17.016252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.245 [2024-12-15 10:45:17.016265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.245 #34 NEW cov: 11794 ft: 14315 corp: 13/325b lim: 40 exec/s: 0 rss: 69Mb L: 39/39 MS: 1 ChangeBit- 00:07:28.245 [2024-12-15 10:45:17.056149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71237151 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.245 [2024-12-15 10:45:17.056172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.245 [2024-12-15 10:45:17.056232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.245 [2024-12-15 10:45:17.056245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.245 [2024-12-15 10:45:17.056301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:27717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.245 [2024-12-15 10:45:17.056314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.245 [2024-12-15 10:45:17.056370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.245 [2024-12-15 10:45:17.056382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.245 #35 NEW cov: 11794 ft: 14359 corp: 14/364b lim: 40 exec/s: 0 rss: 69Mb L: 39/39 MS: 1 ChangeByte- 00:07:28.246 [2024-12-15 10:45:17.096123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.246 [2024-12-15 10:45:17.096146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.246 [2024-12-15 10:45:17.096202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.246 [2024-12-15 10:45:17.096215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.246 [2024-12-15 10:45:17.096272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.246 [2024-12-15 10:45:17.096285] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.246 #36 NEW cov: 11794 ft: 14414 corp: 15/389b lim: 40 exec/s: 0 rss: 69Mb L: 25/39 MS: 1 ShuffleBytes- 00:07:28.246 [2024-12-15 10:45:17.135924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:7f7f7f7f cdw11:7f7f7f7f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.246 [2024-12-15 10:45:17.135951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.246 #40 NEW cov: 11794 ft: 14460 corp: 16/400b lim: 40 exec/s: 0 rss: 69Mb L: 11/39 MS: 4 CopyPart-CopyPart-EraseBytes-InsertRepeatedBytes- 00:07:28.246 [2024-12-15 10:45:17.176500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71237171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.246 [2024-12-15 10:45:17.176525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.246 [2024-12-15 10:45:17.176581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.246 [2024-12-15 10:45:17.176594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.246 [2024-12-15 10:45:17.176651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.246 [2024-12-15 10:45:17.176664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.246 [2024-12-15 10:45:17.176719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:71010171 cdw11:71607171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.246 [2024-12-15 10:45:17.176732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.246 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:28.246 #41 NEW cov: 11817 ft: 14506 corp: 17/434b lim: 40 exec/s: 0 rss: 69Mb L: 34/39 MS: 1 CMP- DE: "\001\001"- 00:07:28.246 [2024-12-15 10:45:17.216681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.246 [2024-12-15 10:45:17.216705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.246 [2024-12-15 10:45:17.216761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:7171710e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.246 [2024-12-15 10:45:17.216775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.246 [2024-12-15 10:45:17.216830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000071 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.246 [2024-12-15 10:45:17.216844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.246 [2024-12-15 10:45:17.216900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.246 [2024-12-15 10:45:17.216913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.246 #42 NEW cov: 11817 ft: 14551 corp: 18/467b lim: 40 exec/s: 0 rss: 69Mb L: 33/39 MS: 1 CMP- DE: "\016\000\000\000\000\000\000\000"- 00:07:28.246 [2024-12-15 10:45:17.256654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:3d0a7171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.246 [2024-12-15 10:45:17.256678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.246 [2024-12-15 10:45:17.256736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71717191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.246 [2024-12-15 10:45:17.256752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.246 [2024-12-15 10:45:17.256808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:8e717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.246 [2024-12-15 10:45:17.256822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.505 #43 NEW cov: 11817 ft: 14564 corp: 19/493b lim: 40 exec/s: 43 rss: 69Mb L: 26/39 MS: 1 ChangeBinInt- 00:07:28.505 [2024-12-15 10:45:17.296903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71237171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.505 [2024-12-15 10:45:17.296929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.505 [2024-12-15 10:45:17.296986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.505 [2024-12-15 10:45:17.297000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.505 [2024-12-15 10:45:17.297055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.505 [2024-12-15 10:45:17.297069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.505 [2024-12-15 10:45:17.297124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:01017171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.505 [2024-12-15 10:45:17.297137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.505 #44 NEW cov: 11817 ft: 14583 corp: 20/529b lim: 40 exec/s: 44 rss: 69Mb L: 36/39 MS: 1 PersAutoDict- DE: "\001\001"- 00:07:28.505 [2024-12-15 10:45:17.337013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 
cid:4 nsid:0 cdw10:0a717171 cdw11:71237171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.505 [2024-12-15 10:45:17.337037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.505 [2024-12-15 10:45:17.337092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:34717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.506 [2024-12-15 10:45:17.337106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.506 [2024-12-15 10:45:17.337161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.506 [2024-12-15 10:45:17.337175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.506 [2024-12-15 10:45:17.337226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.506 [2024-12-15 10:45:17.337239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.506 #45 NEW cov: 11817 ft: 14612 corp: 21/564b lim: 40 exec/s: 45 rss: 69Mb L: 35/39 MS: 1 InsertByte- 00:07:28.506 [2024-12-15 10:45:17.376971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71237151 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.506 [2024-12-15 10:45:17.376996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.506 [2024-12-15 10:45:17.377052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.506 [2024-12-15 10:45:17.377069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.506 [2024-12-15 10:45:17.377122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71710a71 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.506 [2024-12-15 10:45:17.377135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.506 #46 NEW cov: 11817 ft: 14631 corp: 22/595b lim: 40 exec/s: 46 rss: 69Mb L: 31/39 MS: 1 CrossOver- 00:07:28.506 [2024-12-15 10:45:17.417225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71237171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.506 [2024-12-15 10:45:17.417249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.506 [2024-12-15 10:45:17.417303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.506 [2024-12-15 10:45:17.417317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.506 [2024-12-15 10:45:17.417370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE 
SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.506 [2024-12-15 10:45:17.417400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.506 [2024-12-15 10:45:17.417456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.506 [2024-12-15 10:45:17.417470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.506 #47 NEW cov: 11817 ft: 14654 corp: 23/629b lim: 40 exec/s: 47 rss: 69Mb L: 34/39 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:07:28.506 [2024-12-15 10:45:17.456861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:71717171 cdw11:23717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.506 [2024-12-15 10:45:17.456885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.506 #48 NEW cov: 11817 ft: 14667 corp: 24/641b lim: 40 exec/s: 48 rss: 69Mb L: 12/39 MS: 1 ShuffleBytes- 00:07:28.506 [2024-12-15 10:45:17.496973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:71717171 cdw11:29237171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.506 [2024-12-15 10:45:17.496997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.506 #49 NEW cov: 11817 ft: 14706 corp: 25/654b lim: 40 exec/s: 49 rss: 69Mb L: 13/39 MS: 1 InsertByte- 00:07:28.765 [2024-12-15 10:45:17.537795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71237151 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.765 [2024-12-15 10:45:17.537821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.765 [2024-12-15 10:45:17.537876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.765 [2024-12-15 10:45:17.537890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.765 [2024-12-15 10:45:17.537943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.765 [2024-12-15 10:45:17.537957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.765 [2024-12-15 10:45:17.538014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:71237171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.766 [2024-12-15 10:45:17.538027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.766 [2024-12-15 10:45:17.538084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:717a7a7a cdw11:7a7a7171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.766 [2024-12-15 10:45:17.538097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:28.766 #50 NEW cov: 11817 ft: 14773 corp: 26/694b lim: 40 exec/s: 50 rss: 69Mb L: 40/40 MS: 1 InsertByte- 00:07:28.766 [2024-12-15 10:45:17.577248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:71717171 cdw11:23717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.766 [2024-12-15 10:45:17.577273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.766 #51 NEW cov: 11817 ft: 14797 corp: 27/707b lim: 40 exec/s: 51 rss: 69Mb L: 13/40 MS: 1 InsertByte- 00:07:28.766 [2024-12-15 10:45:17.618020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71237151 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.766 [2024-12-15 10:45:17.618044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.766 [2024-12-15 10:45:17.618099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.766 [2024-12-15 10:45:17.618112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.766 [2024-12-15 10:45:17.618166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.766 [2024-12-15 10:45:17.618179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.766 [2024-12-15 10:45:17.618231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:71237171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.766 [2024-12-15 10:45:17.618244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.766 [2024-12-15 10:45:17.618298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:717a7a7a cdw11:7a7a7171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.766 [2024-12-15 10:45:17.618311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:28.766 #52 NEW cov: 11817 ft: 14802 corp: 28/747b lim: 40 exec/s: 52 rss: 69Mb L: 40/40 MS: 1 CrossOver- 00:07:28.766 [2024-12-15 10:45:17.657667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.766 [2024-12-15 10:45:17.657693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.766 [2024-12-15 10:45:17.657751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.766 [2024-12-15 10:45:17.657765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.766 #53 NEW cov: 11817 ft: 15063 corp: 29/767b lim: 40 exec/s: 53 rss: 69Mb L: 20/40 MS: 1 EraseBytes- 00:07:28.766 [2024-12-15 10:45:17.697756] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.766 [2024-12-15 10:45:17.697781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.766 [2024-12-15 10:45:17.697838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.766 [2024-12-15 10:45:17.697852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.766 #54 NEW cov: 11817 ft: 15152 corp: 30/783b lim: 40 exec/s: 54 rss: 69Mb L: 16/40 MS: 1 EraseBytes- 00:07:28.766 [2024-12-15 10:45:17.738001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.766 [2024-12-15 10:45:17.738025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.766 [2024-12-15 10:45:17.738081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717161 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.766 [2024-12-15 10:45:17.738095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.766 [2024-12-15 10:45:17.738150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.766 [2024-12-15 10:45:17.738163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.766 #55 NEW cov: 11817 ft: 15180 corp: 31/808b lim: 40 exec/s: 55 rss: 69Mb L: 25/40 MS: 1 ChangeBit- 00:07:28.766 [2024-12-15 10:45:17.778020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.766 [2024-12-15 10:45:17.778045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.766 [2024-12-15 10:45:17.778101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.766 [2024-12-15 10:45:17.778115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.025 #56 NEW cov: 11817 ft: 15208 corp: 32/829b lim: 40 exec/s: 56 rss: 69Mb L: 21/40 MS: 1 InsertByte- 00:07:29.025 [2024-12-15 10:45:17.818440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71237171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.025 [2024-12-15 10:45:17.818465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.025 [2024-12-15 10:45:17.818523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.025 [2024-12-15 10:45:17.818536] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.025 [2024-12-15 10:45:17.818592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.025 [2024-12-15 10:45:17.818605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.025 [2024-12-15 10:45:17.818659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:71717171 cdw11:868e8e8e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.025 [2024-12-15 10:45:17.818672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.025 #57 NEW cov: 11817 ft: 15212 corp: 33/863b lim: 40 exec/s: 57 rss: 70Mb L: 34/40 MS: 1 ChangeBinInt- 00:07:29.025 [2024-12-15 10:45:17.858073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:375e2566 cdw11:cf8d0400 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.025 [2024-12-15 10:45:17.858100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.025 #62 NEW cov: 11817 ft: 15223 corp: 34/872b lim: 40 exec/s: 62 rss: 70Mb L: 9/40 MS: 5 CrossOver-ChangeByte-CopyPart-ChangeByte-CMP- DE: "7^%f\317\215\004\000"- 00:07:29.025 [2024-12-15 10:45:17.898157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:7f7f7f81 cdw11:7f7f7f7f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.025 [2024-12-15 10:45:17.898182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.025 #63 NEW cov: 11817 ft: 15242 corp: 35/883b lim: 40 exec/s: 63 rss: 70Mb L: 11/40 MS: 1 ChangeBinInt- 00:07:29.025 [2024-12-15 10:45:17.938797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71217151 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.025 [2024-12-15 10:45:17.938821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.025 [2024-12-15 10:45:17.938876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.025 [2024-12-15 10:45:17.938889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.025 [2024-12-15 10:45:17.938944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.025 [2024-12-15 10:45:17.938957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.025 [2024-12-15 10:45:17.939011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.025 [2024-12-15 10:45:17.939023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.025 #64 NEW cov: 11817 ft: 15252 corp: 36/922b lim: 40 exec/s: 64 rss: 70Mb L: 39/40 MS: 1 
ChangeBit- 00:07:29.026 [2024-12-15 10:45:17.978915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71237171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.026 [2024-12-15 10:45:17.978940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.026 [2024-12-15 10:45:17.979014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.026 [2024-12-15 10:45:17.979027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.026 [2024-12-15 10:45:17.979082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.026 [2024-12-15 10:45:17.979096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.026 [2024-12-15 10:45:17.979152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:26717171 cdw11:71868e8e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.026 [2024-12-15 10:45:17.979165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.026 #65 NEW cov: 11817 ft: 15259 corp: 37/957b lim: 40 exec/s: 65 rss: 70Mb L: 35/40 MS: 1 InsertByte- 00:07:29.026 [2024-12-15 10:45:18.018497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:71717171 cdw11:23710a71 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.026 [2024-12-15 10:45:18.018522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.285 #66 NEW cov: 11817 ft: 15283 corp: 38/966b lim: 40 exec/s: 66 rss: 70Mb L: 9/40 MS: 1 CrossOver- 00:07:29.285 [2024-12-15 10:45:18.059124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71237171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.285 [2024-12-15 10:45:18.059149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.285 [2024-12-15 10:45:18.059221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.285 [2024-12-15 10:45:18.059235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.285 [2024-12-15 10:45:18.059291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717571 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.285 [2024-12-15 10:45:18.059304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.285 [2024-12-15 10:45:18.059359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:71717171 cdw11:868e8e8e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.285 [2024-12-15 10:45:18.059372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 
cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.285 #67 NEW cov: 11817 ft: 15301 corp: 39/1000b lim: 40 exec/s: 67 rss: 70Mb L: 34/40 MS: 1 ChangeBit- 00:07:29.285 [2024-12-15 10:45:18.099346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:7123717a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.285 [2024-12-15 10:45:18.099370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.285 [2024-12-15 10:45:18.099431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.286 [2024-12-15 10:45:18.099445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.286 [2024-12-15 10:45:18.099501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.286 [2024-12-15 10:45:18.099515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.286 [2024-12-15 10:45:18.099570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:71237171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.286 [2024-12-15 10:45:18.099583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.286 [2024-12-15 10:45:18.099638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:717a7a7a cdw11:7a7a7171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.286 [2024-12-15 10:45:18.099651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:29.286 #68 NEW cov: 11817 ft: 15303 corp: 40/1040b lim: 40 exec/s: 68 rss: 70Mb L: 40/40 MS: 1 ChangeByte- 00:07:29.286 [2024-12-15 10:45:18.139151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71716868 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.286 [2024-12-15 10:45:18.139175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.286 [2024-12-15 10:45:18.139230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:68686868 cdw11:68687171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.286 [2024-12-15 10:45:18.139244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.286 [2024-12-15 10:45:18.139302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.286 [2024-12-15 10:45:18.139316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.286 #69 NEW cov: 11817 ft: 15317 corp: 41/1069b lim: 40 exec/s: 69 rss: 70Mb L: 29/40 MS: 1 InsertRepeatedBytes- 00:07:29.286 [2024-12-15 10:45:18.179125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:29.286 [2024-12-15 10:45:18.179149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.286 [2024-12-15 10:45:18.179220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.286 [2024-12-15 10:45:18.179233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.286 #70 NEW cov: 11817 ft: 15335 corp: 42/1090b lim: 40 exec/s: 70 rss: 70Mb L: 21/40 MS: 1 ChangeByte- 00:07:29.286 [2024-12-15 10:45:18.219552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71237171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.286 [2024-12-15 10:45:18.219576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.286 [2024-12-15 10:45:18.219634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00010100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.286 [2024-12-15 10:45:18.219647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.286 [2024-12-15 10:45:18.219703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.286 [2024-12-15 10:45:18.219717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.286 [2024-12-15 10:45:18.219771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.286 [2024-12-15 10:45:18.219784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.286 #71 NEW cov: 11817 ft: 15339 corp: 43/1124b lim: 40 exec/s: 71 rss: 70Mb L: 34/40 MS: 1 PersAutoDict- DE: "\001\001"- 00:07:29.286 [2024-12-15 10:45:18.259549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71010002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.286 [2024-12-15 10:45:18.259573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.286 [2024-12-15 10:45:18.259647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00686868 cdw11:68687171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.286 [2024-12-15 10:45:18.259661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.286 [2024-12-15 10:45:18.259718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.286 [2024-12-15 10:45:18.259732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.286 #72 NEW cov: 11817 ft: 15349 corp: 44/1153b lim: 40 exec/s: 36 rss: 70Mb L: 29/40 MS: 1 CMP- DE: "\001\000\002\000"- 00:07:29.286 #72 DONE cov: 
11817 ft: 15349 corp: 44/1153b lim: 40 exec/s: 36 rss: 70Mb 00:07:29.286 ###### Recommended dictionary. ###### 00:07:29.286 "\001\001" # Uses: 2 00:07:29.286 "\016\000\000\000\000\000\000\000" # Uses: 0 00:07:29.286 "\000\000\000\000\000\000\000\000" # Uses: 0 00:07:29.286 "7^%f\317\215\004\000" # Uses: 0 00:07:29.286 "\001\000\002\000" # Uses: 0 00:07:29.286 ###### End of recommended dictionary. ###### 00:07:29.286 Done 72 runs in 2 second(s) 00:07:29.545 10:45:18 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_12.conf 00:07:29.545 10:45:18 -- ../common.sh@72 -- # (( i++ )) 00:07:29.545 10:45:18 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:29.545 10:45:18 -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:07:29.545 10:45:18 -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:07:29.545 10:45:18 -- nvmf/run.sh@24 -- # local timen=1 00:07:29.545 10:45:18 -- nvmf/run.sh@25 -- # local core=0x1 00:07:29.545 10:45:18 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:29.545 10:45:18 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:07:29.545 10:45:18 -- nvmf/run.sh@29 -- # printf %02d 13 00:07:29.545 10:45:18 -- nvmf/run.sh@29 -- # port=4413 00:07:29.545 10:45:18 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:29.545 10:45:18 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:07:29.545 10:45:18 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:29.545 10:45:18 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 -r /var/tmp/spdk13.sock 00:07:29.545 [2024-12-15 10:45:18.451193] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:29.545 [2024-12-15 10:45:18.451269] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1312519 ] 00:07:29.545 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.805 [2024-12-15 10:45:18.705667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.805 [2024-12-15 10:45:18.789570] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:29.805 [2024-12-15 10:45:18.789693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.063 [2024-12-15 10:45:18.847674] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.064 [2024-12-15 10:45:18.863976] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:07:30.064 INFO: Running with entropic power schedule (0xFF, 100). 
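
The shell trace just above shows the pattern run.sh follows for every fuzzer in the sweep: derive a per-run TCP port from the fuzzer id (printf %02d 13 gives 13, appended to 44 to give 4413), rewrite trsvcid in the template fuzz_json.conf with sed, create a per-fuzzer corpus directory, and launch llvm_nvme_fuzz against the freshly listening target. A minimal bash sketch of that step, pieced together from the trace; the variable names fuzzer_type, timen, core, nvmf_cfg, and corpus_dir appear in the trace itself, while ROOT and the redirect into $nvmf_cfg are assumptions rather than the actual run.sh source:

# Sketch of the per-run launch step seen in the trace above (run 13 shown).
# ROOT and the sed output redirect are assumptions; flags mirror the log.
start_llvm_fuzz() {
  local fuzzer_type=$1 timen=$2 core=$3
  local port="44$(printf %02d "$fuzzer_type")"   # 13 -> 4413, 14 -> 4414
  local nvmf_cfg="/tmp/fuzz_json_${fuzzer_type}.conf"
  local corpus_dir="$ROOT/../corpus/llvm_nvmf_${fuzzer_type}"
  mkdir -p "$corpus_dir"
  # retarget the JSON config from the default port 4420 to this run's port
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    "$ROOT/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
  "$ROOT/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
    -P "$ROOT/../output/llvm/" \
    -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port" \
    -c "$nvmf_cfg" -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type" \
    -r "/var/tmp/spdk${fuzzer_type}.sock"
}
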
00:07:30.064 INFO: Seed: 3665119090 00:07:30.064 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:30.064 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:30.064 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:30.064 INFO: A corpus is not provided, starting from an empty corpus 00:07:30.064 #2 INITED exec/s: 0 rss: 60Mb 00:07:30.064 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:30.064 This may also happen if the target rejected all inputs we tried so far 00:07:30.064 [2024-12-15 10:45:18.909618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0abdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.064 [2024-12-15 10:45:18.909645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.064 [2024-12-15 10:45:18.909706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.064 [2024-12-15 10:45:18.909720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.064 [2024-12-15 10:45:18.909778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.064 [2024-12-15 10:45:18.909794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.064 [2024-12-15 10:45:18.909849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.064 [2024-12-15 10:45:18.909862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.323 NEW_FUNC[1/670]: 0x44cd38 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:07:30.323 NEW_FUNC[2/670]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:30.323 #3 NEW cov: 11570 ft: 11579 corp: 2/36b lim: 40 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:30.323 [2024-12-15 10:45:19.240772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0abdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.323 [2024-12-15 10:45:19.240820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.323 #4 NEW cov: 11691 ft: 12989 corp: 3/50b lim: 40 exec/s: 0 rss: 68Mb L: 14/35 MS: 1 CrossOver- 00:07:30.323 [2024-12-15 10:45:19.291173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:08a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.323 [2024-12-15 10:45:19.291202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.323 [2024-12-15 10:45:19.291335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.323 [2024-12-15 10:45:19.291355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.323 [2024-12-15 10:45:19.291475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.323 [2024-12-15 10:45:19.291492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.323 #14 NEW cov: 11697 ft: 13321 corp: 4/77b lim: 40 exec/s: 0 rss: 68Mb L: 27/35 MS: 5 ChangeBinInt-CrossOver-ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:07:30.323 [2024-12-15 10:45:19.331172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:08a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.323 [2024-12-15 10:45:19.331200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.323 [2024-12-15 10:45:19.331332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.323 [2024-12-15 10:45:19.331350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.582 #15 NEW cov: 11782 ft: 13738 corp: 5/98b lim: 40 exec/s: 0 rss: 68Mb L: 21/35 MS: 1 EraseBytes- 00:07:30.582 [2024-12-15 10:45:19.381746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.582 [2024-12-15 10:45:19.381774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.582 [2024-12-15 10:45:19.381907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.582 [2024-12-15 10:45:19.381925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.582 [2024-12-15 10:45:19.382053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.582 [2024-12-15 10:45:19.382070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.582 [2024-12-15 10:45:19.382195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.582 [2024-12-15 10:45:19.382212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.582 #18 NEW cov: 11782 ft: 13850 corp: 6/134b lim: 40 exec/s: 0 rss: 68Mb L: 36/36 MS: 3 ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:07:30.582 [2024-12-15 10:45:19.421861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:08c0c0c0 cdw11:c0c0c0c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
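
Each fuzzed admin command surfaces in the log twice: nvme_admin_qpair_print_command prints it as submitted (here DIRECTIVE RECEIVE, opcode 0x1a), and spdk_nvme_print_completion prints the target's verdict (INVALID OPCODE, since the directive is not supported). A quick tally of how often each directive opcode was submitted can be pulled from a saved copy of this console output; build.log is a hypothetical filename, not something the job produces:

# hypothetical post-processing, not part of the test scripts: count
# DIRECTIVE SEND/RECEIVE submissions recorded by the qpair prints
grep -o 'DIRECTIVE \(SEND\|RECEIVE\) ([0-9a-f]*)' build.log | sort | uniq -c
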
00:07:30.582 [2024-12-15 10:45:19.421889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.582 [2024-12-15 10:45:19.422025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c0c0c0c0 cdw11:c0c0c0c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.582 [2024-12-15 10:45:19.422042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.582 [2024-12-15 10:45:19.422171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:c0c0a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.582 [2024-12-15 10:45:19.422189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.582 [2024-12-15 10:45:19.422313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.582 [2024-12-15 10:45:19.422329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.582 #19 NEW cov: 11782 ft: 13934 corp: 7/172b lim: 40 exec/s: 0 rss: 68Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:07:30.582 [2024-12-15 10:45:19.471985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0abdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.582 [2024-12-15 10:45:19.472013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.582 [2024-12-15 10:45:19.472138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.582 [2024-12-15 10:45:19.472156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.582 [2024-12-15 10:45:19.472285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.582 [2024-12-15 10:45:19.472302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.582 [2024-12-15 10:45:19.472442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:bdbdbdbd cdw11:bdffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.582 [2024-12-15 10:45:19.472460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.582 #20 NEW cov: 11782 ft: 14050 corp: 8/211b lim: 40 exec/s: 0 rss: 68Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:07:30.582 [2024-12-15 10:45:19.521439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0abdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.582 [2024-12-15 10:45:19.521469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.582 #21 NEW cov: 11782 ft: 14083 corp: 9/226b lim: 40 exec/s: 0 rss: 68Mb L: 15/39 MS: 1 CrossOver- 00:07:30.582 [2024-12-15 
10:45:19.571917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:08a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.582 [2024-12-15 10:45:19.571944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.582 [2024-12-15 10:45:19.572074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.582 [2024-12-15 10:45:19.572091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.582 #22 NEW cov: 11782 ft: 14103 corp: 10/247b lim: 40 exec/s: 0 rss: 68Mb L: 21/39 MS: 1 CopyPart- 00:07:30.842 [2024-12-15 10:45:19.612446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0abdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.842 [2024-12-15 10:45:19.612473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.842 [2024-12-15 10:45:19.612607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.842 [2024-12-15 10:45:19.612626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.842 [2024-12-15 10:45:19.612754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.842 [2024-12-15 10:45:19.612771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.842 [2024-12-15 10:45:19.612896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.842 [2024-12-15 10:45:19.612912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.842 #23 NEW cov: 11782 ft: 14147 corp: 11/282b lim: 40 exec/s: 0 rss: 68Mb L: 35/39 MS: 1 CrossOver- 00:07:30.842 [2024-12-15 10:45:19.652068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.842 [2024-12-15 10:45:19.652095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.842 [2024-12-15 10:45:19.652219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.842 [2024-12-15 10:45:19.652236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.842 #24 NEW cov: 11782 ft: 14181 corp: 12/303b lim: 40 exec/s: 0 rss: 68Mb L: 21/39 MS: 1 ChangeBit- 00:07:30.842 [2024-12-15 10:45:19.692654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.842 
[2024-12-15 10:45:19.692680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.842 [2024-12-15 10:45:19.692799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.842 [2024-12-15 10:45:19.692816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.842 [2024-12-15 10:45:19.692949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.842 [2024-12-15 10:45:19.692965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.842 [2024-12-15 10:45:19.693094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.842 [2024-12-15 10:45:19.693110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.842 #25 NEW cov: 11782 ft: 14284 corp: 13/339b lim: 40 exec/s: 0 rss: 68Mb L: 36/39 MS: 1 CrossOver- 00:07:30.842 [2024-12-15 10:45:19.732797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0abdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.842 [2024-12-15 10:45:19.732823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.842 [2024-12-15 10:45:19.732944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.842 [2024-12-15 10:45:19.732961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.842 [2024-12-15 10:45:19.733095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.842 [2024-12-15 10:45:19.733111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.842 [2024-12-15 10:45:19.733236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.842 [2024-12-15 10:45:19.733251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.842 #26 NEW cov: 11782 ft: 14380 corp: 14/374b lim: 40 exec/s: 0 rss: 69Mb L: 35/39 MS: 1 ShuffleBytes- 00:07:30.842 [2024-12-15 10:45:19.773073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.842 [2024-12-15 10:45:19.773098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.842 [2024-12-15 10:45:19.773231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff 
cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.842 [2024-12-15 10:45:19.773247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.842 [2024-12-15 10:45:19.773380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.842 [2024-12-15 10:45:19.773398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.842 [2024-12-15 10:45:19.773545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffff3f cdw11:3f3f3fff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.842 [2024-12-15 10:45:19.773563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.842 [2024-12-15 10:45:19.773683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffff02 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.842 [2024-12-15 10:45:19.773700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:30.842 #27 NEW cov: 11782 ft: 14429 corp: 15/414b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:30.842 [2024-12-15 10:45:19.812524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0abdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.842 [2024-12-15 10:45:19.812551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.842 [2024-12-15 10:45:19.812680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:bd410abd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.842 [2024-12-15 10:45:19.812696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.842 #28 NEW cov: 11782 ft: 14445 corp: 16/430b lim: 40 exec/s: 0 rss: 69Mb L: 16/40 MS: 1 InsertByte- 00:07:30.842 [2024-12-15 10:45:19.853112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:08c0c0c0 cdw11:c0c0c0c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.842 [2024-12-15 10:45:19.853137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.842 [2024-12-15 10:45:19.853260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c0c025c0 cdw11:c0c0c0c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.842 [2024-12-15 10:45:19.853276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.842 [2024-12-15 10:45:19.853403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:c0c0c0a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.842 [2024-12-15 10:45:19.853424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.842 [2024-12-15 10:45:19.853551] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.842 [2024-12-15 10:45:19.853567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.102 #29 NEW cov: 11782 ft: 14523 corp: 17/469b lim: 40 exec/s: 0 rss: 69Mb L: 39/40 MS: 1 InsertByte- 00:07:31.102 [2024-12-15 10:45:19.893261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.102 [2024-12-15 10:45:19.893289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.102 [2024-12-15 10:45:19.893408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.102 [2024-12-15 10:45:19.893430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.102 [2024-12-15 10:45:19.893560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.102 [2024-12-15 10:45:19.893576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.102 [2024-12-15 10:45:19.893704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:a1fba1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.102 [2024-12-15 10:45:19.893722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.102 #30 NEW cov: 11782 ft: 14532 corp: 18/506b lim: 40 exec/s: 30 rss: 69Mb L: 37/40 MS: 1 InsertByte- 00:07:31.102 [2024-12-15 10:45:19.932931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:08a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.102 [2024-12-15 10:45:19.932961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.102 [2024-12-15 10:45:19.933088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.102 [2024-12-15 10:45:19.933106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.102 #31 NEW cov: 11782 ft: 14541 corp: 19/527b lim: 40 exec/s: 31 rss: 69Mb L: 21/40 MS: 1 ShuffleBytes- 00:07:31.102 [2024-12-15 10:45:19.973045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:08a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.102 [2024-12-15 10:45:19.973073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.102 [2024-12-15 10:45:19.973213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.102 [2024-12-15 10:45:19.973230] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.102 #32 NEW cov: 11782 ft: 14554 corp: 20/548b lim: 40 exec/s: 32 rss: 69Mb L: 21/40 MS: 1 ShuffleBytes- 00:07:31.102 [2024-12-15 10:45:20.013004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00a11500 cdw11:0000a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.102 [2024-12-15 10:45:20.013031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.102 [2024-12-15 10:45:20.013168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.102 [2024-12-15 10:45:20.013185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.102 #33 NEW cov: 11782 ft: 14567 corp: 21/569b lim: 40 exec/s: 33 rss: 69Mb L: 21/40 MS: 1 ChangeBinInt- 00:07:31.102 [2024-12-15 10:45:20.073225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0abdbd cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.102 [2024-12-15 10:45:20.073253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.102 #34 NEW cov: 11782 ft: 14760 corp: 22/584b lim: 40 exec/s: 34 rss: 69Mb L: 15/40 MS: 1 CMP- DE: "\377\377\377\377"- 00:07:31.102 [2024-12-15 10:45:20.113571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0abdbd cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.102 [2024-12-15 10:45:20.113599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.102 [2024-12-15 10:45:20.113726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.102 [2024-12-15 10:45:20.113743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.362 #35 NEW cov: 11782 ft: 14801 corp: 23/606b lim: 40 exec/s: 35 rss: 69Mb L: 22/40 MS: 1 InsertRepeatedBytes- 00:07:31.362 [2024-12-15 10:45:20.153513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:08a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.362 [2024-12-15 10:45:20.153555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.362 [2024-12-15 10:45:20.153655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:a1a1ffff cdw11:ffffa1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.362 [2024-12-15 10:45:20.153676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.362 #36 NEW cov: 11782 ft: 14857 corp: 24/627b lim: 40 exec/s: 36 rss: 69Mb L: 21/40 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:07:31.362 [2024-12-15 10:45:20.194104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.362 [2024-12-15 10:45:20.194132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.362 [2024-12-15 10:45:20.194262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.362 [2024-12-15 10:45:20.194280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.362 [2024-12-15 10:45:20.194374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.362 [2024-12-15 10:45:20.194392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.362 [2024-12-15 10:45:20.194535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.362 [2024-12-15 10:45:20.194561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.362 #37 NEW cov: 11782 ft: 14882 corp: 25/666b lim: 40 exec/s: 37 rss: 69Mb L: 39/40 MS: 1 InsertRepeatedBytes- 00:07:31.362 [2024-12-15 10:45:20.234502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0abdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.362 [2024-12-15 10:45:20.234529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.362 [2024-12-15 10:45:20.234664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:2a2a2a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.362 [2024-12-15 10:45:20.234681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.362 [2024-12-15 10:45:20.234818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2abdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.362 [2024-12-15 10:45:20.234835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.362 [2024-12-15 10:45:20.234970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.362 [2024-12-15 10:45:20.234987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.362 [2024-12-15 10:45:20.235117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.362 [2024-12-15 10:45:20.235135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:31.362 #38 NEW cov: 11782 ft: 14897 corp: 26/706b lim: 40 exec/s: 38 rss: 69Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:31.362 [2024-12-15 10:45:20.284656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.362 [2024-12-15 10:45:20.284683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.362 [2024-12-15 10:45:20.284807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.362 [2024-12-15 10:45:20.284845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.362 [2024-12-15 10:45:20.284976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.362 [2024-12-15 10:45:20.284992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.362 [2024-12-15 10:45:20.285136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ff2fff3f cdw11:3f3f3fff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.362 [2024-12-15 10:45:20.285153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.362 [2024-12-15 10:45:20.285273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffff02 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.362 [2024-12-15 10:45:20.285291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:31.362 #39 NEW cov: 11782 ft: 14927 corp: 27/746b lim: 40 exec/s: 39 rss: 69Mb L: 40/40 MS: 1 ChangeByte- 00:07:31.362 [2024-12-15 10:45:20.333907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0abdbd cdw11:07000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.362 [2024-12-15 10:45:20.333934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.362 #40 NEW cov: 11782 ft: 14949 corp: 28/761b lim: 40 exec/s: 40 rss: 69Mb L: 15/40 MS: 1 ChangeBinInt- 00:07:31.362 [2024-12-15 10:45:20.374628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0abdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.362 [2024-12-15 10:45:20.374655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.362 [2024-12-15 10:45:20.374779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.362 [2024-12-15 10:45:20.374795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.362 [2024-12-15 10:45:20.374923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.362 [2024-12-15 10:45:20.374940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.362 [2024-12-15 
10:45:20.375070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:bdbdbdbd cdw11:bdffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.362 [2024-12-15 10:45:20.375087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.621 #41 NEW cov: 11782 ft: 14984 corp: 29/800b lim: 40 exec/s: 41 rss: 69Mb L: 39/40 MS: 1 ShuffleBytes- 00:07:31.621 [2024-12-15 10:45:20.424327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00a11500 cdw11:0000a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.621 [2024-12-15 10:45:20.424354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.622 [2024-12-15 10:45:20.424491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.622 [2024-12-15 10:45:20.424508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.622 #42 NEW cov: 11782 ft: 15002 corp: 30/822b lim: 40 exec/s: 42 rss: 69Mb L: 22/40 MS: 1 InsertByte- 00:07:31.622 [2024-12-15 10:45:20.464503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:08a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.622 [2024-12-15 10:45:20.464529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.622 [2024-12-15 10:45:20.464653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.622 [2024-12-15 10:45:20.464668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.622 #43 NEW cov: 11782 ft: 15018 corp: 31/842b lim: 40 exec/s: 43 rss: 69Mb L: 20/40 MS: 1 EraseBytes- 00:07:31.622 [2024-12-15 10:45:20.505233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.622 [2024-12-15 10:45:20.505260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.622 [2024-12-15 10:45:20.505401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.622 [2024-12-15 10:45:20.505424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.622 [2024-12-15 10:45:20.505541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.622 [2024-12-15 10:45:20.505558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.622 [2024-12-15 10:45:20.505679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.622 
[2024-12-15 10:45:20.505696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.622 [2024-12-15 10:45:20.505820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffff02 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.622 [2024-12-15 10:45:20.505837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:31.622 #44 NEW cov: 11782 ft: 15029 corp: 32/882b lim: 40 exec/s: 44 rss: 70Mb L: 40/40 MS: 1 InsertByte- 00:07:31.622 [2024-12-15 10:45:20.544519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0abdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.622 [2024-12-15 10:45:20.544545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.622 #45 NEW cov: 11782 ft: 15051 corp: 33/897b lim: 40 exec/s: 45 rss: 70Mb L: 15/40 MS: 1 ShuffleBytes- 00:07:31.622 [2024-12-15 10:45:20.584845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.622 [2024-12-15 10:45:20.584873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.622 [2024-12-15 10:45:20.585008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:bd0abdbd cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.622 [2024-12-15 10:45:20.585027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.622 #46 NEW cov: 11782 ft: 15071 corp: 34/920b lim: 40 exec/s: 46 rss: 70Mb L: 23/40 MS: 1 CopyPart- 00:07:31.622 [2024-12-15 10:45:20.624935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ea11500 cdw11:0000a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.622 [2024-12-15 10:45:20.624971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.622 [2024-12-15 10:45:20.625102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.622 [2024-12-15 10:45:20.625124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.881 #47 NEW cov: 11782 ft: 15081 corp: 35/942b lim: 40 exec/s: 47 rss: 70Mb L: 22/40 MS: 1 ChangeByte- 00:07:31.881 [2024-12-15 10:45:20.674853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:08a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.881 [2024-12-15 10:45:20.674883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.881 [2024-12-15 10:45:20.675005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.881 [2024-12-15 10:45:20.675022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.881 #48 NEW cov: 11782 ft: 15109 corp: 36/962b lim: 40 exec/s: 48 rss: 70Mb L: 20/40 MS: 1 ShuffleBytes- 00:07:31.881 [2024-12-15 10:45:20.725724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:08c0c0c0 cdw11:c0c0c0c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.881 [2024-12-15 10:45:20.725751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.881 [2024-12-15 10:45:20.725884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c0c025c0 cdw11:c0c0c0c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.881 [2024-12-15 10:45:20.725901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.881 [2024-12-15 10:45:20.726040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:c0c0c0a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.881 [2024-12-15 10:45:20.726059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.881 [2024-12-15 10:45:20.726188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.881 [2024-12-15 10:45:20.726207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.881 #49 NEW cov: 11782 ft: 15114 corp: 37/1001b lim: 40 exec/s: 49 rss: 70Mb L: 39/40 MS: 1 ChangeBit- 00:07:31.882 [2024-12-15 10:45:20.775876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.882 [2024-12-15 10:45:20.775903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.882 [2024-12-15 10:45:20.776032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:bd0abdbd cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.882 [2024-12-15 10:45:20.776050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.882 [2024-12-15 10:45:20.776176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.882 [2024-12-15 10:45:20.776195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.882 [2024-12-15 10:45:20.776320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.882 [2024-12-15 10:45:20.776340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.882 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:31.882 #50 NEW cov: 11805 ft: 15120 corp: 38/1040b lim: 40 exec/s: 50 rss: 70Mb L: 39/40 MS: 1 
InsertRepeatedBytes- 00:07:31.882 [2024-12-15 10:45:20.826323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0abdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.882 [2024-12-15 10:45:20.826350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.882 [2024-12-15 10:45:20.826486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.882 [2024-12-15 10:45:20.826505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.882 [2024-12-15 10:45:20.826632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.882 [2024-12-15 10:45:20.826647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.882 [2024-12-15 10:45:20.826768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:bdbdbdbd cdw11:bdffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.882 [2024-12-15 10:45:20.826786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.882 [2024-12-15 10:45:20.826915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffbdecbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.882 [2024-12-15 10:45:20.826933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:31.882 #51 NEW cov: 11805 ft: 15124 corp: 39/1080b lim: 40 exec/s: 51 rss: 70Mb L: 40/40 MS: 1 InsertByte- 00:07:31.882 [2024-12-15 10:45:20.875748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:08a1a1e1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.882 [2024-12-15 10:45:20.875775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.882 [2024-12-15 10:45:20.875912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.882 [2024-12-15 10:45:20.875928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.141 #52 NEW cov: 11805 ft: 15128 corp: 40/1101b lim: 40 exec/s: 52 rss: 70Mb L: 21/40 MS: 1 ChangeBit- 00:07:32.141 [2024-12-15 10:45:20.916563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.141 [2024-12-15 10:45:20.916591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.141 [2024-12-15 10:45:20.916717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.141 [2024-12-15 10:45:20.916735] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.141 [2024-12-15 10:45:20.916864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.141 [2024-12-15 10:45:20.916884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.141 [2024-12-15 10:45:20.917028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.141 [2024-12-15 10:45:20.917045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.141 [2024-12-15 10:45:20.917168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffff02 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.141 [2024-12-15 10:45:20.917186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:32.141 #53 NEW cov: 11805 ft: 15145 corp: 41/1141b lim: 40 exec/s: 26 rss: 70Mb L: 40/40 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:07:32.142 #53 DONE cov: 11805 ft: 15145 corp: 41/1141b lim: 40 exec/s: 26 rss: 70Mb 00:07:32.142 ###### Recommended dictionary. ###### 00:07:32.142 "\377\377\377\377" # Uses: 2 00:07:32.142 ###### End of recommended dictionary. ###### 00:07:32.142 Done 53 runs in 2 second(s) 00:07:32.142 10:45:21 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_13.conf 00:07:32.142 10:45:21 -- ../common.sh@72 -- # (( i++ )) 00:07:32.142 10:45:21 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:32.142 10:45:21 -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:07:32.142 10:45:21 -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:07:32.142 10:45:21 -- nvmf/run.sh@24 -- # local timen=1 00:07:32.142 10:45:21 -- nvmf/run.sh@25 -- # local core=0x1 00:07:32.142 10:45:21 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:32.142 10:45:21 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:07:32.142 10:45:21 -- nvmf/run.sh@29 -- # printf %02d 14 00:07:32.142 10:45:21 -- nvmf/run.sh@29 -- # port=4414 00:07:32.142 10:45:21 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:32.142 10:45:21 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:07:32.142 10:45:21 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:32.142 10:45:21 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 -r /var/tmp/spdk14.sock 00:07:32.142 [2024-12-15 10:45:21.119079] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
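
At the end of each run libFuzzer prints the byte strings that proved productive as a "Recommended dictionary", as in the run-13 summary above. Saving them for later runs is a common workflow; a hedged sketch follows. Two caveats, both assumptions here: libFuzzer's -dict parser accepts \xNN hex escapes, so the octal escapes printed in the log may need converting, and this log does not show whether llvm_nvme_fuzz forwards a -dict= flag to libFuzzer at all.

# Sketch: stash the run-13 recommendation as a dictionary entry.
# File name and the kw1 label are arbitrary; \xff\xff\xff\xff re-encodes
# the "\377\377\377\377" entry from the summary above in hex-escape form.
cat > nvmf_13.dict <<'EOF'
kw1="\xff\xff\xff\xff"
EOF
# Feeding it back would look like: llvm_nvme_fuzz ... -dict=nvmf_13.dict
# (whether the harness forwards -dict to libFuzzer is assumed, not shown)
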
00:07:32.142 [2024-12-15 10:45:21.119142] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1313041 ] 00:07:32.401 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.401 [2024-12-15 10:45:21.387449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.660 [2024-12-15 10:45:21.469619] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:32.660 [2024-12-15 10:45:21.469763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.660 [2024-12-15 10:45:21.527533] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.660 [2024-12-15 10:45:21.543830] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:07:32.660 INFO: Running with entropic power schedule (0xFF, 100). 00:07:32.660 INFO: Seed: 2051156693 00:07:32.660 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:32.660 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:32.660 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:32.660 INFO: A corpus is not provided, starting from an empty corpus 00:07:32.660 #2 INITED exec/s: 0 rss: 60Mb 00:07:32.660 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:32.660 This may also happen if the target rejected all inputs we tried so far 00:07:32.660 [2024-12-15 10:45:21.621431] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.660 [2024-12-15 10:45:21.621471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.660 [2024-12-15 10:45:21.621554] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.660 [2024-12-15 10:45:21.621569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.919 NEW_FUNC[1/673]: 0x44e908 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:07:32.919 NEW_FUNC[2/673]: 0x46fd38 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:32.919 #6 NEW cov: 11604 ft: 11606 corp: 2/27b lim: 35 exec/s: 0 rss: 68Mb L: 26/26 MS: 4 ChangeBit-ChangeBit-CrossOver-InsertRepeatedBytes- 00:07:32.919 [2024-12-15 10:45:21.931533] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.919 [2024-12-15 10:45:21.931567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.919 [2024-12-15 10:45:21.931695] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.919 [2024-12-15 10:45:21.931711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.919 [2024-12-15 10:45:21.931839] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.919 [2024-12-15 10:45:21.931874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.178 #7 NEW cov: 11725 ft: 12585 corp: 3/57b lim: 35 exec/s: 0 rss: 68Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:07:33.178 [2024-12-15 10:45:21.981697] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.178 [2024-12-15 10:45:21.981726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.178 [2024-12-15 10:45:21.981858] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.178 [2024-12-15 10:45:21.981874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.178 [2024-12-15 10:45:21.981965] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.178 [2024-12-15 10:45:21.981986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.178 #8 NEW cov: 11731 ft: 12799 corp: 4/87b lim: 35 exec/s: 0 rss: 68Mb L: 30/30 MS: 1 ChangeBinInt- 00:07:33.178 [2024-12-15 10:45:22.021793] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.178 [2024-12-15 10:45:22.021823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.178 [2024-12-15 10:45:22.021959] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.178 [2024-12-15 10:45:22.021975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.178 [2024-12-15 10:45:22.022098] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.178 [2024-12-15 10:45:22.022124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.178 #9 NEW cov: 11816 ft: 13111 corp: 5/120b lim: 35 exec/s: 0 rss: 68Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:07:33.178 [2024-12-15 10:45:22.071647] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.178 [2024-12-15 10:45:22.071675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.178 [2024-12-15 10:45:22.071803] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.178 [2024-12-15 10:45:22.071820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.178 #10 NEW cov: 11816 ft: 13215 corp: 6/147b lim: 35 exec/s: 0 rss: 68Mb L: 27/33 MS: 1 EraseBytes- 
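Each "#N NEW" entry above is a libFuzzer status line: cov: counts covered code edges/blocks, ft: counts features (edge coverage combined with hit counters and other signals), corp: gives the corpus entry count and total size in bytes, lim: is the current input-length limit, exec/s: the execution rate, rss: resident memory, L: the new input's length over the largest input in the corpus so far, and MS: the mutation sequence that produced it (DE: lists any dictionary entries used). A quick sketch for charting coverage growth from a captured log follows; fuzz.log is a placeholder name, and the field lookup assumes this status format, which can shift between LLVM releases.

awk '/ NEW / {
    for (i = 1; i <= NF; i++) {
        if ($i ~ /^#[0-9]+$/) n = $i        # execution counter
        if ($i == "cov:")  cov  = $(i + 1)  # covered edges so far
        if ($i == "corp:") corp = $(i + 1)  # corpus entries/bytes
    }
    print n, "cov=" cov, "corp=" corp
}' fuzz.log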
00:07:33.178 [2024-12-15 10:45:22.111721] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.178 [2024-12-15 10:45:22.111747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.178 [2024-12-15 10:45:22.111872] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.178 [2024-12-15 10:45:22.111889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.178 #11 NEW cov: 11816 ft: 13365 corp: 7/173b lim: 35 exec/s: 0 rss: 68Mb L: 26/33 MS: 1 ChangeBit- 00:07:33.178 [2024-12-15 10:45:22.152072] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.178 [2024-12-15 10:45:22.152098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.178 [2024-12-15 10:45:22.152223] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.178 [2024-12-15 10:45:22.152240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.178 [2024-12-15 10:45:22.152361] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.178 [2024-12-15 10:45:22.152378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.178 [2024-12-15 10:45:22.152508] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.178 [2024-12-15 10:45:22.152531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.178 #17 NEW cov: 11816 ft: 13590 corp: 8/206b lim: 35 exec/s: 0 rss: 68Mb L: 33/33 MS: 1 CrossOver- 00:07:33.437 [2024-12-15 10:45:22.202405] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.437 [2024-12-15 10:45:22.202439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.437 [2024-12-15 10:45:22.202563] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.437 [2024-12-15 10:45:22.202580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.437 [2024-12-15 10:45:22.202702] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-12-15 10:45:22.202723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.438 #18 NEW cov: 11816 ft: 13690 corp: 9/237b lim: 35 exec/s: 0 rss: 68Mb L: 31/33 MS: 1 InsertByte- 00:07:33.438 [2024-12-15 10:45:22.241941] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: 
SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-12-15 10:45:22.241968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.438 #19 NEW cov: 11816 ft: 13972 corp: 10/253b lim: 35 exec/s: 0 rss: 68Mb L: 16/33 MS: 1 EraseBytes- 00:07:33.438 [2024-12-15 10:45:22.282470] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-12-15 10:45:22.282505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.438 [2024-12-15 10:45:22.282739] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-12-15 10:45:22.282755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.438 [2024-12-15 10:45:22.282879] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-12-15 10:45:22.282897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.438 #20 NEW cov: 11816 ft: 14049 corp: 11/285b lim: 35 exec/s: 0 rss: 68Mb L: 32/33 MS: 1 InsertRepeatedBytes- 00:07:33.438 [2024-12-15 10:45:22.322371] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-12-15 10:45:22.322400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.438 [2024-12-15 10:45:22.322533] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-12-15 10:45:22.322554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.438 [2024-12-15 10:45:22.322680] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-12-15 10:45:22.322697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.438 #21 NEW cov: 11816 ft: 14089 corp: 12/307b lim: 35 exec/s: 0 rss: 68Mb L: 22/33 MS: 1 EraseBytes- 00:07:33.438 [2024-12-15 10:45:22.372821] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-12-15 10:45:22.372853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.438 [2024-12-15 10:45:22.372978] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-12-15 10:45:22.372997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.438 [2024-12-15 10:45:22.373128] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:33.438 [2024-12-15 10:45:22.373150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.438 #22 NEW cov: 11816 ft: 14124 corp: 13/338b lim: 35 exec/s: 0 rss: 68Mb L: 31/33 MS: 1 InsertByte- 00:07:33.438 [2024-12-15 10:45:22.412405] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-12-15 10:45:22.412438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.438 #23 NEW cov: 11816 ft: 14139 corp: 14/354b lim: 35 exec/s: 0 rss: 68Mb L: 16/33 MS: 1 EraseBytes- 00:07:33.697 [2024-12-15 10:45:22.453039] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.697 [2024-12-15 10:45:22.453068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.697 [2024-12-15 10:45:22.453254] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.697 [2024-12-15 10:45:22.453273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.697 [2024-12-15 10:45:22.453400] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.697 [2024-12-15 10:45:22.453419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.697 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:33.697 #24 NEW cov: 11839 ft: 14181 corp: 15/386b lim: 35 exec/s: 0 rss: 69Mb L: 32/33 MS: 1 CopyPart- 00:07:33.697 [2024-12-15 10:45:22.503242] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.697 [2024-12-15 10:45:22.503275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.697 [2024-12-15 10:45:22.503514] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.697 [2024-12-15 10:45:22.503530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.697 [2024-12-15 10:45:22.503663] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.697 [2024-12-15 10:45:22.503680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.697 #25 NEW cov: 11839 ft: 14196 corp: 16/419b lim: 35 exec/s: 0 rss: 69Mb L: 33/33 MS: 1 InsertByte- 00:07:33.697 [2024-12-15 10:45:22.543365] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.697 [2024-12-15 10:45:22.543396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.697 
[2024-12-15 10:45:22.543530] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.697 [2024-12-15 10:45:22.543549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.697 [2024-12-15 10:45:22.543685] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.697 [2024-12-15 10:45:22.543702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.697 #26 NEW cov: 11839 ft: 14218 corp: 17/447b lim: 35 exec/s: 0 rss: 69Mb L: 28/33 MS: 1 EraseBytes- 00:07:33.697 [2024-12-15 10:45:22.583313] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.697 [2024-12-15 10:45:22.583345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.697 [2024-12-15 10:45:22.583584] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.697 [2024-12-15 10:45:22.583604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.697 [2024-12-15 10:45:22.583737] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.697 [2024-12-15 10:45:22.583754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.697 #27 NEW cov: 11839 ft: 14237 corp: 18/479b lim: 35 exec/s: 27 rss: 69Mb L: 32/33 MS: 1 ChangeByte- 00:07:33.698 [2024-12-15 10:45:22.623490] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.698 [2024-12-15 10:45:22.623522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.698 [2024-12-15 10:45:22.623750] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.698 [2024-12-15 10:45:22.623769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.698 [2024-12-15 10:45:22.623907] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.698 [2024-12-15 10:45:22.623934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.698 #28 NEW cov: 11839 ft: 14256 corp: 19/512b lim: 35 exec/s: 28 rss: 69Mb L: 33/33 MS: 1 InsertByte- 00:07:33.698 [2024-12-15 10:45:22.673396] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.698 [2024-12-15 10:45:22.673429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.698 [2024-12-15 10:45:22.673555] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.698 [2024-12-15 10:45:22.673572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.698 [2024-12-15 10:45:22.673694] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.698 [2024-12-15 10:45:22.673715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.698 #29 NEW cov: 11839 ft: 14319 corp: 20/543b lim: 35 exec/s: 29 rss: 69Mb L: 31/33 MS: 1 ShuffleBytes- 00:07:33.957 [2024-12-15 10:45:22.723420] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.957 [2024-12-15 10:45:22.723449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.957 #30 NEW cov: 11839 ft: 14329 corp: 21/563b lim: 35 exec/s: 30 rss: 69Mb L: 20/33 MS: 1 EraseBytes- 00:07:33.957 [2024-12-15 10:45:22.763660] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.957 [2024-12-15 10:45:22.763686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.957 [2024-12-15 10:45:22.763847] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.957 [2024-12-15 10:45:22.763864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.957 #31 NEW cov: 11839 ft: 14348 corp: 22/589b lim: 35 exec/s: 31 rss: 69Mb L: 26/33 MS: 1 ShuffleBytes- 00:07:33.957 [2024-12-15 10:45:22.814406] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.957 [2024-12-15 10:45:22.814437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.957 [2024-12-15 10:45:22.814590] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.957 [2024-12-15 10:45:22.814608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.957 [2024-12-15 10:45:22.814745] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.957 [2024-12-15 10:45:22.814769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.957 [2024-12-15 10:45:22.814896] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.957 [2024-12-15 10:45:22.814919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.957 [2024-12-15 10:45:22.815066] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:33.957 [2024-12-15 10:45:22.815083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.957 #32 NEW cov: 11839 ft: 14474 corp: 23/624b lim: 35 exec/s: 32 rss: 69Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:33.957 [2024-12-15 10:45:22.864090] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.957 [2024-12-15 10:45:22.864118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.957 [2024-12-15 10:45:22.864264] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.957 [2024-12-15 10:45:22.864282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.957 #38 NEW cov: 11839 ft: 14478 corp: 24/650b lim: 35 exec/s: 38 rss: 69Mb L: 26/35 MS: 1 ChangeBit- 00:07:33.957 [2024-12-15 10:45:22.904337] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.957 [2024-12-15 10:45:22.904372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.957 [2024-12-15 10:45:22.904587] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.957 [2024-12-15 10:45:22.904605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.957 [2024-12-15 10:45:22.904729] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.957 [2024-12-15 10:45:22.904746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.957 #44 NEW cov: 11839 ft: 14528 corp: 25/683b lim: 35 exec/s: 44 rss: 69Mb L: 33/35 MS: 1 ChangeBit- 00:07:33.957 [2024-12-15 10:45:22.964529] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.957 [2024-12-15 10:45:22.964561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.957 [2024-12-15 10:45:22.964786] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.957 [2024-12-15 10:45:22.964802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.957 [2024-12-15 10:45:22.964934] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.957 [2024-12-15 10:45:22.964953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.216 #45 NEW cov: 11839 ft: 14601 corp: 26/716b lim: 35 exec/s: 45 rss: 69Mb L: 33/35 MS: 1 ChangeByte- 00:07:34.216 [2024-12-15 10:45:23.014253] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 
cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.216 [2024-12-15 10:45:23.014279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.216 #46 NEW cov: 11839 ft: 14646 corp: 27/735b lim: 35 exec/s: 46 rss: 69Mb L: 19/35 MS: 1 CopyPart- 00:07:34.217 [2024-12-15 10:45:23.054758] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.217 [2024-12-15 10:45:23.054789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.217 [2024-12-15 10:45:23.055011] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.217 [2024-12-15 10:45:23.055027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.217 [2024-12-15 10:45:23.055146] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES TIMESTAMP cid:7 cdw10:0000000e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.217 [2024-12-15 10:45:23.055163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.217 #47 NEW cov: 11839 ft: 14667 corp: 28/769b lim: 35 exec/s: 47 rss: 69Mb L: 34/35 MS: 1 InsertByte- 00:07:34.217 [2024-12-15 10:45:23.094998] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.217 [2024-12-15 10:45:23.095024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.217 [2024-12-15 10:45:23.095151] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.217 [2024-12-15 10:45:23.095167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.217 [2024-12-15 10:45:23.095293] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.217 [2024-12-15 10:45:23.095313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.217 #48 NEW cov: 11839 ft: 14709 corp: 29/800b lim: 35 exec/s: 48 rss: 69Mb L: 31/35 MS: 1 InsertByte- 00:07:34.217 [2024-12-15 10:45:23.135103] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.217 [2024-12-15 10:45:23.135129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.217 [2024-12-15 10:45:23.135259] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.217 [2024-12-15 10:45:23.135275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.217 [2024-12-15 10:45:23.135398] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.217 [2024-12-15 
10:45:23.135420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.217 #49 NEW cov: 11839 ft: 14724 corp: 30/833b lim: 35 exec/s: 49 rss: 69Mb L: 33/35 MS: 1 CrossOver- 00:07:34.217 [2024-12-15 10:45:23.175096] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.217 [2024-12-15 10:45:23.175125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.217 [2024-12-15 10:45:23.175324] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.217 [2024-12-15 10:45:23.175340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.217 [2024-12-15 10:45:23.175468] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.217 [2024-12-15 10:45:23.175485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.217 #50 NEW cov: 11839 ft: 14732 corp: 31/866b lim: 35 exec/s: 50 rss: 69Mb L: 33/35 MS: 1 ShuffleBytes- 00:07:34.217 [2024-12-15 10:45:23.215294] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.217 [2024-12-15 10:45:23.215330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.217 [2024-12-15 10:45:23.215459] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.217 [2024-12-15 10:45:23.215476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.217 [2024-12-15 10:45:23.215573] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.217 [2024-12-15 10:45:23.215597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.476 #51 NEW cov: 11839 ft: 14740 corp: 32/897b lim: 35 exec/s: 51 rss: 69Mb L: 31/35 MS: 1 ChangeBit- 00:07:34.476 [2024-12-15 10:45:23.255432] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.476 [2024-12-15 10:45:23.255460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.476 [2024-12-15 10:45:23.255585] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.476 [2024-12-15 10:45:23.255605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.476 [2024-12-15 10:45:23.255716] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.476 [2024-12-15 10:45:23.255732] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.476 #52 NEW cov: 11839 ft: 14741 corp: 33/931b lim: 35 exec/s: 52 rss: 69Mb L: 34/35 MS: 1 InsertRepeatedBytes- 00:07:34.476 [2024-12-15 10:45:23.295582] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.477 [2024-12-15 10:45:23.295608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.477 [2024-12-15 10:45:23.295733] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.477 [2024-12-15 10:45:23.295760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.477 [2024-12-15 10:45:23.295879] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.477 [2024-12-15 10:45:23.295902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.477 #53 NEW cov: 11839 ft: 14747 corp: 34/961b lim: 35 exec/s: 53 rss: 69Mb L: 30/35 MS: 1 ChangeBinInt- 00:07:34.477 [2024-12-15 10:45:23.335551] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.477 [2024-12-15 10:45:23.335586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.477 [2024-12-15 10:45:23.335813] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.477 [2024-12-15 10:45:23.335831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.477 [2024-12-15 10:45:23.335956] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.477 [2024-12-15 10:45:23.335972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.477 #54 NEW cov: 11839 ft: 14753 corp: 35/994b lim: 35 exec/s: 54 rss: 69Mb L: 33/35 MS: 1 ShuffleBytes- 00:07:34.477 [2024-12-15 10:45:23.375505] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.477 [2024-12-15 10:45:23.375531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.477 [2024-12-15 10:45:23.375663] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.477 [2024-12-15 10:45:23.375679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.477 #55 NEW cov: 11839 ft: 14767 corp: 36/1020b lim: 35 exec/s: 55 rss: 69Mb L: 26/35 MS: 1 ChangeBit- 00:07:34.477 [2024-12-15 10:45:23.415944] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.477 
[2024-12-15 10:45:23.415970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.477 [2024-12-15 10:45:23.416093] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES LBA RANGE TYPE cid:6 cdw10:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.477 [2024-12-15 10:45:23.416110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.477 [2024-12-15 10:45:23.416223] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES LBA RANGE TYPE cid:7 cdw10:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.477 [2024-12-15 10:45:23.416240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.477 NEW_FUNC[1/1]: 0x46af78 in feat_lba_range_type /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:289 00:07:34.477 #56 NEW cov: 11850 ft: 14787 corp: 37/1054b lim: 35 exec/s: 56 rss: 70Mb L: 34/35 MS: 1 InsertRepeatedBytes- 00:07:34.477 [2024-12-15 10:45:23.455981] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.477 [2024-12-15 10:45:23.456014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.477 [2024-12-15 10:45:23.456245] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.477 [2024-12-15 10:45:23.456263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.477 [2024-12-15 10:45:23.456387] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.477 [2024-12-15 10:45:23.456407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.477 #57 NEW cov: 11850 ft: 14795 corp: 38/1088b lim: 35 exec/s: 57 rss: 70Mb L: 34/35 MS: 1 CrossOver- 00:07:34.736 [2024-12-15 10:45:23.506539] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000031 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.736 [2024-12-15 10:45:23.506569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.736 [2024-12-15 10:45:23.506695] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000031 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.736 [2024-12-15 10:45:23.506711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.736 [2024-12-15 10:45:23.506833] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.736 [2024-12-15 10:45:23.506848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.736 [2024-12-15 10:45:23.506972] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.736 [2024-12-15 
10:45:23.506988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:34.736 #58 NEW cov: 11850 ft: 14827 corp: 39/1123b lim: 35 exec/s: 58 rss: 70Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:34.736 [2024-12-15 10:45:23.556312] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.736 [2024-12-15 10:45:23.556341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.736 [2024-12-15 10:45:23.556582] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.736 [2024-12-15 10:45:23.556601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.736 [2024-12-15 10:45:23.556740] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.736 [2024-12-15 10:45:23.556757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.736 #59 NEW cov: 11850 ft: 14831 corp: 40/1156b lim: 35 exec/s: 59 rss: 70Mb L: 33/35 MS: 1 InsertRepeatedBytes- 00:07:34.736 [2024-12-15 10:45:23.596251] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.736 [2024-12-15 10:45:23.596279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.736 [2024-12-15 10:45:23.596403] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.736 [2024-12-15 10:45:23.596424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.736 #60 NEW cov: 11850 ft: 14853 corp: 41/1182b lim: 35 exec/s: 30 rss: 70Mb L: 26/35 MS: 1 ChangeByte- 00:07:34.736 #60 DONE cov: 11850 ft: 14853 corp: 41/1182b lim: 35 exec/s: 30 rss: 70Mb 00:07:34.736 Done 60 runs in 2 second(s) 00:07:34.736 10:45:23 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_14.conf 00:07:34.736 10:45:23 -- ../common.sh@72 -- # (( i++ )) 00:07:34.736 10:45:23 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:34.736 10:45:23 -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:07:34.736 10:45:23 -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:07:34.736 10:45:23 -- nvmf/run.sh@24 -- # local timen=1 00:07:34.736 10:45:23 -- nvmf/run.sh@25 -- # local core=0x1 00:07:34.736 10:45:23 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:34.736 10:45:23 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:07:34.736 10:45:23 -- nvmf/run.sh@29 -- # printf %02d 15 00:07:34.736 10:45:23 -- nvmf/run.sh@29 -- # port=4415 00:07:34.736 10:45:23 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:34.996 10:45:23 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:07:34.996 10:45:23 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:34.996 10:45:23 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 -r /var/tmp/spdk15.sock 00:07:34.996 [2024-12-15 10:45:23.781986] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:34.996 [2024-12-15 10:45:23.782050] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1313386 ] 00:07:34.996 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.255 [2024-12-15 10:45:24.035838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.255 [2024-12-15 10:45:24.116259] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:35.255 [2024-12-15 10:45:24.116400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.255 [2024-12-15 10:45:24.174318] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.255 [2024-12-15 10:45:24.190642] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:07:35.255 INFO: Running with entropic power schedule (0xFF, 100). 00:07:35.255 INFO: Seed: 401193154 00:07:35.255 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:35.255 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:35.255 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:35.255 INFO: A corpus is not provided, starting from an empty corpus 00:07:35.255 #2 INITED exec/s: 0 rss: 60Mb 00:07:35.255 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:35.255 This may also happen if the target rejected all inputs we tried so far 00:07:35.255 [2024-12-15 10:45:24.240335] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.255 [2024-12-15 10:45:24.240362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.255 [2024-12-15 10:45:24.240425] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.255 [2024-12-15 10:45:24.240439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.255 [2024-12-15 10:45:24.240496] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.255 [2024-12-15 10:45:24.240509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.515 NEW_FUNC[1/671]: 0x44fe48 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:07:35.515 NEW_FUNC[2/671]: 0x46fd38 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:35.515 #4 NEW cov: 11574 ft: 11564 corp: 2/29b lim: 35 exec/s: 0 rss: 68Mb L: 28/28 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:35.774 #5 NEW cov: 11687 ft: 12678 corp: 3/41b lim: 35 exec/s: 0 rss: 68Mb L: 12/28 MS: 1 InsertRepeatedBytes- 00:07:35.774 [2024-12-15 10:45:24.580952] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.774 [2024-12-15 10:45:24.580985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.774 [2024-12-15 10:45:24.581104] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.774 [2024-12-15 10:45:24.581118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.774 [2024-12-15 10:45:24.581174] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.774 [2024-12-15 10:45:24.581186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.774 NEW_FUNC[1/1]: 0x473ca8 in feat_keep_alive_timer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:364 00:07:35.774 #6 NEW cov: 11712 ft: 12855 corp: 4/69b lim: 35 exec/s: 0 rss: 69Mb L: 28/28 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\017"- 00:07:35.774 [2024-12-15 10:45:24.621031] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000024 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.774 [2024-12-15 10:45:24.621058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.774 [2024-12-15 10:45:24.621117] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.774 [2024-12-15 10:45:24.621130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.774 [2024-12-15 10:45:24.621186] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.774 [2024-12-15 10:45:24.621199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.774 [2024-12-15 10:45:24.621255] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.774 [2024-12-15 10:45:24.621268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.774 #7 NEW cov: 11797 ft: 13211 corp: 5/98b lim: 35 exec/s: 0 rss: 69Mb L: 29/29 MS: 1 InsertByte- 00:07:35.774 [2024-12-15 10:45:24.661148] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000024 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.774 [2024-12-15 10:45:24.661173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.775 [2024-12-15 10:45:24.661229] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.775 [2024-12-15 10:45:24.661243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.775 [2024-12-15 10:45:24.661301] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.775 [2024-12-15 10:45:24.661314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.775 [2024-12-15 10:45:24.661371] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.775 [2024-12-15 10:45:24.661384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.775 #8 NEW cov: 11797 ft: 13355 corp: 6/127b lim: 35 exec/s: 0 rss: 69Mb L: 29/29 MS: 1 ChangeByte- 00:07:35.775 [2024-12-15 10:45:24.700991] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000741 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.775 [2024-12-15 10:45:24.701015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.775 [2024-12-15 10:45:24.701075] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.775 [2024-12-15 10:45:24.701091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.775 #13 NEW cov: 11797 ft: 13741 corp: 7/141b lim: 35 exec/s: 0 rss: 69Mb L: 14/29 MS: 5 CrossOver-CrossOver-InsertByte-CrossOver-PersAutoDict- DE: "\000\000\000\000\000\000\000\017"- 00:07:35.775 [2024-12-15 10:45:24.741103] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000741 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:35.775 [2024-12-15 10:45:24.741129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.775 [2024-12-15 10:45:24.741190] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.775 [2024-12-15 10:45:24.741203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.775 #14 NEW cov: 11797 ft: 13793 corp: 8/155b lim: 35 exec/s: 0 rss: 69Mb L: 14/29 MS: 1 CopyPart- 00:07:35.775 [2024-12-15 10:45:24.781091] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000741 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.775 [2024-12-15 10:45:24.781115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.034 #15 NEW cov: 11797 ft: 13958 corp: 9/167b lim: 35 exec/s: 0 rss: 69Mb L: 12/29 MS: 1 EraseBytes- 00:07:36.034 [2024-12-15 10:45:24.821355] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000741 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.034 [2024-12-15 10:45:24.821380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.034 [2024-12-15 10:45:24.821440] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.034 [2024-12-15 10:45:24.821454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.034 #16 NEW cov: 11797 ft: 13984 corp: 10/181b lim: 35 exec/s: 0 rss: 69Mb L: 14/29 MS: 1 ChangeBit- 00:07:36.034 [2024-12-15 10:45:24.861704] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000024 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.034 [2024-12-15 10:45:24.861729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.034 [2024-12-15 10:45:24.861786] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.034 [2024-12-15 10:45:24.861799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.034 [2024-12-15 10:45:24.861855] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.034 [2024-12-15 10:45:24.861885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.034 [2024-12-15 10:45:24.861943] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.034 [2024-12-15 10:45:24.861957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.034 #17 NEW cov: 11797 ft: 14097 corp: 11/210b lim: 35 exec/s: 0 rss: 69Mb L: 29/29 MS: 1 ChangeByte- 00:07:36.034 [2024-12-15 10:45:24.901838] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000024 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:07:36.034 [2024-12-15 10:45:24.901862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.034 [2024-12-15 10:45:24.901920] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.034 [2024-12-15 10:45:24.901936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.034 [2024-12-15 10:45:24.901993] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.034 [2024-12-15 10:45:24.902006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.034 [2024-12-15 10:45:24.902061] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.034 [2024-12-15 10:45:24.902074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.034 #18 NEW cov: 11797 ft: 14112 corp: 12/240b lim: 35 exec/s: 0 rss: 69Mb L: 30/30 MS: 1 InsertByte- 00:07:36.034 [2024-12-15 10:45:24.941686] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000741 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.034 [2024-12-15 10:45:24.941710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.034 [2024-12-15 10:45:24.941768] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.034 [2024-12-15 10:45:24.941782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.034 #19 NEW cov: 11797 ft: 14120 corp: 13/255b lim: 35 exec/s: 0 rss: 69Mb L: 15/30 MS: 1 InsertByte- 00:07:36.034 [2024-12-15 10:45:24.982076] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.034 [2024-12-15 10:45:24.982100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.034 [2024-12-15 10:45:24.982215] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.034 [2024-12-15 10:45:24.982228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.034 [2024-12-15 10:45:24.982285] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.034 [2024-12-15 10:45:24.982298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.034 #20 NEW cov: 11797 ft: 14222 corp: 14/283b lim: 35 exec/s: 0 rss: 69Mb L: 28/30 MS: 1 CopyPart- 00:07:36.034 [2024-12-15 10:45:25.022163] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.034 [2024-12-15 10:45:25.022187] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.035 [2024-12-15 10:45:25.022304] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.035 [2024-12-15 10:45:25.022319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.035 [2024-12-15 10:45:25.022375] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.035 [2024-12-15 10:45:25.022389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.293 #21 NEW cov: 11797 ft: 14266 corp: 15/311b lim: 35 exec/s: 0 rss: 69Mb L: 28/30 MS: 1 ChangeByte- 00:07:36.293 [2024-12-15 10:45:25.062284] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000024 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.293 [2024-12-15 10:45:25.062309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.293 [2024-12-15 10:45:25.062385] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.293 [2024-12-15 10:45:25.062399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.293 [2024-12-15 10:45:25.062461] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000721 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.293 [2024-12-15 10:45:25.062475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.293 [2024-12-15 10:45:25.062531] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.293 [2024-12-15 10:45:25.062545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.293 #22 NEW cov: 11797 ft: 14283 corp: 16/341b lim: 35 exec/s: 0 rss: 69Mb L: 30/30 MS: 1 CopyPart- 00:07:36.293 [2024-12-15 10:45:25.102148] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000741 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.293 [2024-12-15 10:45:25.102171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.293 [2024-12-15 10:45:25.102229] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.293 [2024-12-15 10:45:25.102242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.293 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:36.293 #23 NEW cov: 11820 ft: 14296 corp: 17/356b lim: 35 exec/s: 0 rss: 70Mb L: 15/30 MS: 1 CrossOver- 00:07:36.293 [2024-12-15 10:45:25.142629] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000741 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.293 [2024-12-15 
10:45:25.142653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.293 [2024-12-15 10:45:25.142711] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000136 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.293 [2024-12-15 10:45:25.142724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.293 [2024-12-15 10:45:25.142797] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000136 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.293 [2024-12-15 10:45:25.142811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.293 [2024-12-15 10:45:25.142866] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000136 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.293 [2024-12-15 10:45:25.142879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.293 [2024-12-15 10:45:25.142938] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.293 [2024-12-15 10:45:25.142951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:36.293 #24 NEW cov: 11820 ft: 14423 corp: 18/391b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:36.293 [2024-12-15 10:45:25.182353] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000741 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.293 [2024-12-15 10:45:25.182377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.293 [2024-12-15 10:45:25.182437] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.293 [2024-12-15 10:45:25.182453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.293 #25 NEW cov: 11820 ft: 14440 corp: 19/405b lim: 35 exec/s: 0 rss: 70Mb L: 14/35 MS: 1 ChangeBinInt- 00:07:36.293 [2024-12-15 10:45:25.222483] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000741 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.293 [2024-12-15 10:45:25.222508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.293 [2024-12-15 10:45:25.222566] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.293 [2024-12-15 10:45:25.222580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.293 #26 NEW cov: 11820 ft: 14495 corp: 20/420b lim: 35 exec/s: 26 rss: 70Mb L: 15/35 MS: 1 ChangeBinInt- 00:07:36.293 [2024-12-15 10:45:25.262872] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000024 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.293 [2024-12-15 10:45:25.262898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.293 [2024-12-15 10:45:25.262957] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.293 [2024-12-15 10:45:25.262971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.293 [2024-12-15 10:45:25.263028] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.293 [2024-12-15 10:45:25.263041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.293 [2024-12-15 10:45:25.263098] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.293 [2024-12-15 10:45:25.263112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.293 #27 NEW cov: 11820 ft: 14520 corp: 21/454b lim: 35 exec/s: 27 rss: 70Mb L: 34/35 MS: 1 CrossOver- 00:07:36.294 [2024-12-15 10:45:25.302739] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000741 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.294 [2024-12-15 10:45:25.302765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.294 [2024-12-15 10:45:25.302823] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.294 [2024-12-15 10:45:25.302838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.553 #28 NEW cov: 11820 ft: 14525 corp: 22/469b lim: 35 exec/s: 28 rss: 70Mb L: 15/35 MS: 1 ChangeBit- 00:07:36.553 [2024-12-15 10:45:25.343061] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000024 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.553 [2024-12-15 10:45:25.343085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.553 [2024-12-15 10:45:25.343144] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.553 [2024-12-15 10:45:25.343157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.553 [2024-12-15 10:45:25.343215] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.553 [2024-12-15 10:45:25.343228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.553 [2024-12-15 10:45:25.343287] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.553 [2024-12-15 10:45:25.343299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.553 #29 NEW cov: 11820 ft: 14550 corp: 23/503b lim: 35 exec/s: 29 rss: 70Mb L: 34/35 MS: 1 CopyPart- 00:07:36.553 [2024-12-15 10:45:25.383212] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000024 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.553 [2024-12-15 10:45:25.383236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.553 [2024-12-15 10:45:25.383293] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.553 [2024-12-15 10:45:25.383306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.553 [2024-12-15 10:45:25.383362] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000000de SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.553 [2024-12-15 10:45:25.383375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.553 [2024-12-15 10:45:25.383435] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.553 [2024-12-15 10:45:25.383448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.553 #30 NEW cov: 11820 ft: 14563 corp: 24/533b lim: 35 exec/s: 30 rss: 70Mb L: 30/35 MS: 1 ChangeBinInt- 00:07:36.553 [2024-12-15 10:45:25.423309] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.553 [2024-12-15 10:45:25.423333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.553 [2024-12-15 10:45:25.423450] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.553 [2024-12-15 10:45:25.423465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.553 [2024-12-15 10:45:25.423537] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000004d3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.553 [2024-12-15 10:45:25.423551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.553 #31 NEW cov: 11820 ft: 14609 corp: 25/561b lim: 35 exec/s: 31 rss: 70Mb L: 28/35 MS: 1 CMP- DE: "\252\204\344\311\323\215\004\000"- 00:07:36.553 [2024-12-15 10:45:25.463199] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000741 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.553 [2024-12-15 10:45:25.463223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.553 [2024-12-15 10:45:25.463282] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.553 [2024-12-15 10:45:25.463295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.553 #32 NEW cov: 11820 ft: 14624 corp: 26/576b lim: 35 exec/s: 32 rss: 70Mb L: 15/35 MS: 1 ChangeBit- 00:07:36.553 [2024-12-15 10:45:25.503563] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: 
GET FEATURES RESERVED cid:4 cdw10:00000024 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.553 [2024-12-15 10:45:25.503588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.553 [2024-12-15 10:45:25.503649] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.553 [2024-12-15 10:45:25.503663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.553 [2024-12-15 10:45:25.503717] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.553 [2024-12-15 10:45:25.503731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.553 [2024-12-15 10:45:25.503787] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.553 [2024-12-15 10:45:25.503801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.553 #33 NEW cov: 11820 ft: 14632 corp: 27/605b lim: 35 exec/s: 33 rss: 70Mb L: 29/35 MS: 1 ChangeBinInt- 00:07:36.553 [2024-12-15 10:45:25.543566] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.553 [2024-12-15 10:45:25.543591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.553 [2024-12-15 10:45:25.543709] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.553 [2024-12-15 10:45:25.543724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.814 #34 NEW cov: 11820 ft: 14694 corp: 28/632b lim: 35 exec/s: 34 rss: 70Mb L: 27/35 MS: 1 EraseBytes- 00:07:36.814 [2024-12-15 10:45:25.583808] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000024 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.814 [2024-12-15 10:45:25.583833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.814 [2024-12-15 10:45:25.583892] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.814 [2024-12-15 10:45:25.583906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.814 [2024-12-15 10:45:25.583962] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.814 [2024-12-15 10:45:25.583976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.814 [2024-12-15 10:45:25.584031] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.814 [2024-12-15 10:45:25.584045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.814 #35 NEW cov: 11820 ft: 14705 corp: 29/664b lim: 35 exec/s: 35 rss: 70Mb L: 32/35 MS: 1 InsertRepeatedBytes- 00:07:36.814 [2024-12-15 10:45:25.624067] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000041 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.814 [2024-12-15 10:45:25.624092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.814 [2024-12-15 10:45:25.624148] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.814 [2024-12-15 10:45:25.624161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.814 [2024-12-15 10:45:25.624218] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.814 [2024-12-15 10:45:25.624231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.814 [2024-12-15 10:45:25.624290] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.814 [2024-12-15 10:45:25.624303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.814 [2024-12-15 10:45:25.624361] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.814 [2024-12-15 10:45:25.624374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:36.814 #36 NEW cov: 11820 ft: 14761 corp: 30/699b lim: 35 exec/s: 36 rss: 70Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:36.814 [2024-12-15 10:45:25.664055] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000024 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.814 [2024-12-15 10:45:25.664079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.814 [2024-12-15 10:45:25.664136] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.814 [2024-12-15 10:45:25.664150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.814 [2024-12-15 10:45:25.664204] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.814 [2024-12-15 10:45:25.664218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.814 [2024-12-15 10:45:25.664274] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.814 [2024-12-15 10:45:25.664287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.814 #37 NEW cov: 11820 ft: 14769 corp: 31/733b lim: 35 exec/s: 37 rss: 70Mb L: 34/35 MS: 1 ShuffleBytes- 00:07:36.814 [2024-12-15 
10:45:25.704044] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000741 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.814 [2024-12-15 10:45:25.704068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.814 [2024-12-15 10:45:25.704126] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.814 [2024-12-15 10:45:25.704140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.814 [2024-12-15 10:45:25.704196] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.814 [2024-12-15 10:45:25.704225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.814 #38 NEW cov: 11820 ft: 14788 corp: 32/757b lim: 35 exec/s: 38 rss: 70Mb L: 24/35 MS: 1 CrossOver- 00:07:36.814 [2024-12-15 10:45:25.744151] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.814 [2024-12-15 10:45:25.744176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.814 [2024-12-15 10:45:25.744234] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.814 [2024-12-15 10:45:25.744248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.814 [2024-12-15 10:45:25.744304] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.814 [2024-12-15 10:45:25.744320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.814 #44 NEW cov: 11820 ft: 14790 corp: 33/781b lim: 35 exec/s: 44 rss: 70Mb L: 24/35 MS: 1 InsertRepeatedBytes- 00:07:36.814 #45 NEW cov: 11820 ft: 14884 corp: 34/793b lim: 35 exec/s: 45 rss: 70Mb L: 12/35 MS: 1 ChangeByte- 00:07:36.814 [2024-12-15 10:45:25.824418] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000024 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.814 [2024-12-15 10:45:25.824443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.814 [2024-12-15 10:45:25.824502] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.814 [2024-12-15 10:45:25.824516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.814 [2024-12-15 10:45:25.824575] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.814 [2024-12-15 10:45:25.824588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.074 #46 NEW cov: 11820 ft: 14902 corp: 35/815b lim: 35 exec/s: 46 rss: 70Mb L: 22/35 MS: 1 EraseBytes- 
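A note on the "#NN NEW ..." lines interleaved above: these are standard libFuzzer status lines. "cov:" counts covered control-flow edges, "ft:" counts features (edge/hit-count combinations), "corp:" gives corpus entries and total bytes, "lim:" is the current input-length cap, "L: a/b" is reportedly the new input's length over the largest allowed, and "MS:" names the mutation sequence (EraseBytes, CopyPart, ChangeBit, ...) that produced the input. As an illustration of the field layout only — the parser below is a hypothetical helper, not part of SPDK or libFuzzer — the headline numbers can be pulled out with a single sscanf:

#include <stdio.h>

/* Hypothetical helper: extract the headline numbers from one libFuzzer
 * status line of the shape seen in this log. Field names follow
 * libFuzzer's documented output format. */
int main(void)
{
	const char *line =
	    "#46 NEW cov: 11820 ft: 14902 corp: 35/815b lim: 35 exec/s: 46 rss: 70Mb";
	unsigned input_no, cov, ft, corp_entries, corp_bytes, lim, execs, rss;
	char event[16];

	if (sscanf(line,
	           "#%u %15s cov: %u ft: %u corp: %u/%ub lim: %u exec/s: %u rss: %uMb",
	           &input_no, event, &cov, &ft, &corp_entries, &corp_bytes,
	           &lim, &execs, &rss) == 9) {
		printf("input #%u: %s, %u edges covered, corpus %u entries (%u bytes)\n",
		       input_no, event, cov, corp_entries, corp_bytes);
	}
	return 0;
}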
00:07:37.074 [2024-12-15 10:45:25.864630] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000024 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.074 [2024-12-15 10:45:25.864655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.074 [2024-12-15 10:45:25.864713] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000025e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.074 [2024-12-15 10:45:25.864726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.074 [2024-12-15 10:45:25.864783] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.074 [2024-12-15 10:45:25.864797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.074 [2024-12-15 10:45:25.864853] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.074 [2024-12-15 10:45:25.864865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.074 #47 NEW cov: 11820 ft: 14908 corp: 36/849b lim: 35 exec/s: 47 rss: 70Mb L: 34/35 MS: 1 InsertRepeatedBytes- 00:07:37.074 [2024-12-15 10:45:25.904528] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.074 [2024-12-15 10:45:25.904553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.074 #48 NEW cov: 11820 ft: 14925 corp: 37/866b lim: 35 exec/s: 48 rss: 70Mb L: 17/35 MS: 1 EraseBytes- 00:07:37.074 [2024-12-15 10:45:25.944612] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000041 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.074 [2024-12-15 10:45:25.944636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.074 [2024-12-15 10:45:25.944694] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.074 [2024-12-15 10:45:25.944708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.074 #49 NEW cov: 11820 ft: 14943 corp: 38/881b lim: 35 exec/s: 49 rss: 70Mb L: 15/35 MS: 1 ShuffleBytes- 00:07:37.074 [2024-12-15 10:45:25.984879] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000741 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.074 [2024-12-15 10:45:25.984906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.074 [2024-12-15 10:45:25.984981] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000136 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.074 [2024-12-15 10:45:25.984995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.074 [2024-12-15 10:45:25.985055] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000136 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.074 [2024-12-15 10:45:25.985069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.074 #50 NEW cov: 11820 ft: 14976 corp: 39/908b lim: 35 exec/s: 50 rss: 70Mb L: 27/35 MS: 1 EraseBytes- 00:07:37.074 [2024-12-15 10:45:26.024863] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.074 [2024-12-15 10:45:26.024887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.074 #51 NEW cov: 11820 ft: 14982 corp: 40/925b lim: 35 exec/s: 51 rss: 70Mb L: 17/35 MS: 1 ChangeBinInt- 00:07:37.074 [2024-12-15 10:45:26.065335] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000741 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.074 [2024-12-15 10:45:26.065359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.074 [2024-12-15 10:45:26.065437] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000136 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.074 [2024-12-15 10:45:26.065452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.074 [2024-12-15 10:45:26.065511] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000136 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.074 [2024-12-15 10:45:26.065524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.074 [2024-12-15 10:45:26.065582] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000136 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.074 [2024-12-15 10:45:26.065595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.074 [2024-12-15 10:45:26.065653] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.074 [2024-12-15 10:45:26.065666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:37.334 #52 NEW cov: 11820 ft: 14994 corp: 41/960b lim: 35 exec/s: 52 rss: 70Mb L: 35/35 MS: 1 ChangeByte- 00:07:37.334 [2024-12-15 10:45:26.105322] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.334 [2024-12-15 10:45:26.105345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.334 [2024-12-15 10:45:26.105465] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.334 [2024-12-15 10:45:26.105480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.334 [2024-12-15 10:45:26.105539] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.334 [2024-12-15 10:45:26.105553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.334 #53 NEW cov: 11820 ft: 15010 corp: 42/992b lim: 35 exec/s: 53 rss: 70Mb L: 32/35 MS: 1 CMP- DE: "\000\000\000\000"- 00:07:37.334 [2024-12-15 10:45:26.145465] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.334 [2024-12-15 10:45:26.145489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.334 [2024-12-15 10:45:26.145601] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.334 [2024-12-15 10:45:26.145615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.334 [2024-12-15 10:45:26.145674] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000004aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.334 [2024-12-15 10:45:26.145687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.334 #54 NEW cov: 11820 ft: 15017 corp: 43/1024b lim: 35 exec/s: 54 rss: 70Mb L: 32/35 MS: 1 InsertRepeatedBytes- 00:07:37.334 [2024-12-15 10:45:26.185531] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000024 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.334 [2024-12-15 10:45:26.185555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.334 [2024-12-15 10:45:26.185614] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.334 [2024-12-15 10:45:26.185627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.334 [2024-12-15 10:45:26.185685] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.334 [2024-12-15 10:45:26.185698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.334 [2024-12-15 10:45:26.185754] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.334 [2024-12-15 10:45:26.185767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.334 #55 NEW cov: 11820 ft: 15027 corp: 44/1054b lim: 35 exec/s: 55 rss: 70Mb L: 30/35 MS: 1 CopyPart- 00:07:37.334 [2024-12-15 10:45:26.225675] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000024 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.334 [2024-12-15 10:45:26.225699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.334 [2024-12-15 10:45:26.225758] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.334 [2024-12-15 
10:45:26.225772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.334 [2024-12-15 10:45:26.225828] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.334 [2024-12-15 10:45:26.225842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.334 [2024-12-15 10:45:26.225897] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.334 [2024-12-15 10:45:26.225910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.334 #56 NEW cov: 11820 ft: 15039 corp: 45/1083b lim: 35 exec/s: 28 rss: 70Mb L: 29/35 MS: 1 ChangeByte- 00:07:37.334 #56 DONE cov: 11820 ft: 15039 corp: 45/1083b lim: 35 exec/s: 28 rss: 70Mb 00:07:37.334 ###### Recommended dictionary. ###### 00:07:37.334 "\000\000\000\000\000\000\000\017" # Uses: 1 00:07:37.334 "\252\204\344\311\323\215\004\000" # Uses: 0 00:07:37.334 "\000\000\000\000" # Uses: 0 00:07:37.334 ###### End of recommended dictionary. ###### 00:07:37.334 Done 56 runs in 2 second(s) 00:07:37.647 10:45:26 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_15.conf 00:07:37.647 10:45:26 -- ../common.sh@72 -- # (( i++ )) 00:07:37.647 10:45:26 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:37.647 10:45:26 -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:07:37.647 10:45:26 -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:07:37.647 10:45:26 -- nvmf/run.sh@24 -- # local timen=1 00:07:37.647 10:45:26 -- nvmf/run.sh@25 -- # local core=0x1 00:07:37.647 10:45:26 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:37.647 10:45:26 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:07:37.647 10:45:26 -- nvmf/run.sh@29 -- # printf %02d 16 00:07:37.647 10:45:26 -- nvmf/run.sh@29 -- # port=4416 00:07:37.647 10:45:26 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:37.647 10:45:26 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:07:37.647 10:45:26 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:37.647 10:45:26 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 -r /var/tmp/spdk16.sock 00:07:37.647 [2024-12-15 10:45:26.411609] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
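The shell trace above shows nvmf/run.sh launching fuzzer 16 against a TCP transport on port 4416 (port = 4400 + padded fuzzer number), and the NEW_FUNC markers a few lines below name the harness entry points instrumented for the first time: TestOneInput and fuzz_nvm_read_command in test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c. The "Recommended dictionary" block printed at the end of the previous run uses libFuzzer's -dict= file syntax, so those escaped-byte tokens can be saved to a file and fed back to a later run. For orientation, a libFuzzer target has the general shape sketched below; only the LLVMFuzzerTestOneInput signature is the real libFuzzer interface, while the struct and stub handler are hypothetical stand-ins for SPDK's actual command-submission path:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Minimal sketch of a libFuzzer harness, assuming a toy command layout.
 * Build with: clang -g -fsanitize=fuzzer harness.c */
struct sketch_nvme_cmd {
	uint8_t  opc;   /* opcode, e.g. READ */
	uint64_t lba;   /* starting logical block address */
	uint16_t nlb;   /* zero-based number of logical blocks */
};

static void fuzz_nvm_read_command(const struct sketch_nvme_cmd *cmd)
{
	/* The real function submits the command to the live target and
	 * prints the completion, producing the NOTICE lines in this log. */
	(void)cmd;
}

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
	struct sketch_nvme_cmd cmd = { 0 };

	if (size < sizeof(cmd)) {
		return 0; /* not enough bytes to form a command */
	}
	memcpy(&cmd, data, sizeof(cmd));
	fuzz_nvm_read_command(&cmd);
	return 0; /* always 0, by libFuzzer convention */
}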
00:07:37.647 [2024-12-15 10:45:26.411678] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1313892 ] 00:07:37.647 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.970 [2024-12-15 10:45:26.674886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.970 [2024-12-15 10:45:26.756594] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:37.971 [2024-12-15 10:45:26.756733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.971 [2024-12-15 10:45:26.814465] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.971 [2024-12-15 10:45:26.830769] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:07:37.971 INFO: Running with entropic power schedule (0xFF, 100). 00:07:37.971 INFO: Seed: 3041181844 00:07:37.971 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:37.971 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:37.971 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:37.971 INFO: A corpus is not provided, starting from an empty corpus 00:07:37.971 #2 INITED exec/s: 0 rss: 60Mb 00:07:37.971 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:37.971 This may also happen if the target rejected all inputs we tried so far 00:07:37.971 [2024-12-15 10:45:26.901268] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.971 [2024-12-15 10:45:26.901309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.971 [2024-12-15 10:45:26.901384] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.971 [2024-12-15 10:45:26.901402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.230 NEW_FUNC[1/670]: 0x451308 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:07:38.230 NEW_FUNC[2/670]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:38.230 #29 NEW cov: 11657 ft: 11658 corp: 2/48b lim: 105 exec/s: 0 rss: 68Mb L: 47/47 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:38.230 [2024-12-15 10:45:27.232053] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446702124948520959 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.230 [2024-12-15 10:45:27.232102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.230 [2024-12-15 10:45:27.232236] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.230 [2024-12-15 10:45:27.232257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.230 [2024-12-15 10:45:27.232397] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.230 [2024-12-15 10:45:27.232429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.230 [2024-12-15 10:45:27.232554] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.230 [2024-12-15 10:45:27.232580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.489 NEW_FUNC[1/1]: 0xe94458 in spdk_process_is_primary /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/env.c:290 00:07:38.489 #31 NEW cov: 11776 ft: 12748 corp: 3/138b lim: 105 exec/s: 0 rss: 68Mb L: 90/90 MS: 2 CMP-InsertRepeatedBytes- DE: "\377\377\377\377\377\377\377\377"- 00:07:38.489 [2024-12-15 10:45:27.281920] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446702124948520959 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.489 [2024-12-15 10:45:27.281954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.489 [2024-12-15 10:45:27.282055] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.489 [2024-12-15 10:45:27.282074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.489 [2024-12-15 10:45:27.282193] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.489 [2024-12-15 10:45:27.282218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.489 [2024-12-15 10:45:27.282337] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.489 [2024-12-15 10:45:27.282360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.489 #32 NEW cov: 11782 ft: 13070 corp: 4/228b lim: 105 exec/s: 0 rss: 69Mb L: 90/90 MS: 1 ChangeByte- 00:07:38.489 [2024-12-15 10:45:27.322009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446702124948520959 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.489 [2024-12-15 10:45:27.322046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.489 [2024-12-15 10:45:27.322162] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.489 [2024-12-15 10:45:27.322184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.489 [2024-12-15 10:45:27.322316] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.489 [2024-12-15 10:45:27.322338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.489 [2024-12-15 10:45:27.322462] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.489 [2024-12-15 10:45:27.322488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.489 #33 NEW cov: 11867 ft: 13320 corp: 5/318b lim: 105 exec/s: 0 rss: 69Mb L: 90/90 MS: 1 CMP- DE: "\000\000\000\000"- 00:07:38.489 [2024-12-15 10:45:27.361846] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.489 [2024-12-15 10:45:27.361873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.489 [2024-12-15 10:45:27.362005] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.489 [2024-12-15 10:45:27.362023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.489 #34 NEW cov: 11867 ft: 13455 corp: 6/365b lim: 105 exec/s: 0 rss: 69Mb L: 47/90 MS: 1 ChangeByte- 00:07:38.489 [2024-12-15 10:45:27.402188] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446702124948520959 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.489 [2024-12-15 10:45:27.402217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.489 [2024-12-15 10:45:27.402312] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.489 [2024-12-15 10:45:27.402336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.489 [2024-12-15 10:45:27.402463] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.489 [2024-12-15 10:45:27.402504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.489 [2024-12-15 10:45:27.402638] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.489 [2024-12-15 10:45:27.402661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.489 #35 NEW cov: 11867 ft: 13519 corp: 7/455b lim: 105 exec/s: 0 rss: 69Mb L: 90/90 MS: 1 ChangeBinInt- 00:07:38.490 [2024-12-15 10:45:27.442330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:10778685752873424277 len:38294 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.490 [2024-12-15 10:45:27.442361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.490 [2024-12-15 10:45:27.442472] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:10778685752873424277 len:38294 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.490 [2024-12-15 10:45:27.442495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.490 [2024-12-15 10:45:27.442618] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:10778685752873424277 len:38294 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.490 [2024-12-15 10:45:27.442634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.490 [2024-12-15 10:45:27.442763] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:10778685752873424277 len:38294 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.490 [2024-12-15 10:45:27.442781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.490 #37 NEW cov: 11867 ft: 13555 corp: 8/553b lim: 105 exec/s: 0 rss: 69Mb L: 98/98 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:38.490 [2024-12-15 10:45:27.482542] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446702124948520959 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.490 [2024-12-15 10:45:27.482569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.490 [2024-12-15 10:45:27.482636] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.490 [2024-12-15 10:45:27.482660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.490 [2024-12-15 10:45:27.482791] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.490 [2024-12-15 10:45:27.482810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.490 [2024-12-15 10:45:27.482936] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.490 [2024-12-15 10:45:27.482958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.749 #38 NEW cov: 11867 ft: 13569 corp: 9/643b lim: 105 exec/s: 0 rss: 69Mb L: 90/98 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:07:38.749 [2024-12-15 10:45:27.522041] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.749 [2024-12-15 10:45:27.522072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.749 [2024-12-15 10:45:27.522189] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
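A pattern worth noting in the READ commands above: the fuzzed fields are filled with repeated bytes, which is exactly what the InsertRepeatedBytes mutations flagged in the MS: tags tend to produce. For example, lba:6582955728264977243 is 0x5B5B5B5B5B5B5B5B (every byte 0x5B) and lba:15697817505862638041 is 0xD9D9D9D9D9D9D9D9, while the accompanying len values (23388 = 0x5B5C, 55770 = 0xD9DA) are one greater than the repeated 16-bit pattern, consistent with the zero-based NLB field being printed as a one-based block count. A few lines of C confirm the arithmetic; the constants come straight from the NOTICE lines above:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t ones = 0x0101010101010101ULL; /* one copy per byte lane */

	printf("%llu\n", (unsigned long long)(0x5BULL * ones)); /* 6582955728264977243 */
	printf("%llu\n", (unsigned long long)(0xD9ULL * ones)); /* 15697817505862638041 */
	printf("%u\n", 0x5B5B + 1);                             /* 23388 */
	printf("%u\n", 0xD9D9 + 1);                             /* 55770 */
	return 0;
}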
00:07:38.749 [2024-12-15 10:45:27.522212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.749 [2024-12-15 10:45:27.522335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23339 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.749 [2024-12-15 10:45:27.522360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.749 #39 NEW cov: 11867 ft: 13864 corp: 10/712b lim: 105 exec/s: 0 rss: 69Mb L: 69/98 MS: 1 CrossOver- 00:07:38.749 [2024-12-15 10:45:27.562701] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446702124948520959 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.749 [2024-12-15 10:45:27.562735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.749 [2024-12-15 10:45:27.562836] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.749 [2024-12-15 10:45:27.562858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.749 [2024-12-15 10:45:27.562973] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.749 [2024-12-15 10:45:27.563001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.749 [2024-12-15 10:45:27.563126] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.749 [2024-12-15 10:45:27.563149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.749 #40 NEW cov: 11867 ft: 13888 corp: 11/802b lim: 105 exec/s: 0 rss: 69Mb L: 90/98 MS: 1 ChangeBit- 00:07:38.749 [2024-12-15 10:45:27.602793] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:10778685752873424277 len:38294 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.749 [2024-12-15 10:45:27.602826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.749 [2024-12-15 10:45:27.602931] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:10778685752873424277 len:38294 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.749 [2024-12-15 10:45:27.602958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.749 [2024-12-15 10:45:27.603079] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:10778685752873424277 len:38294 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.749 [2024-12-15 10:45:27.603102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.749 [2024-12-15 10:45:27.603219] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:10778685752873424277 
len:38294 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.749 [2024-12-15 10:45:27.603241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.749 #46 NEW cov: 11867 ft: 13927 corp: 12/900b lim: 105 exec/s: 0 rss: 69Mb L: 98/98 MS: 1 ChangeByte- 00:07:38.749 [2024-12-15 10:45:27.652946] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446702124948520959 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.749 [2024-12-15 10:45:27.652975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.749 [2024-12-15 10:45:27.653075] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.749 [2024-12-15 10:45:27.653097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.749 [2024-12-15 10:45:27.653217] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.749 [2024-12-15 10:45:27.653239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.749 [2024-12-15 10:45:27.653361] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.749 [2024-12-15 10:45:27.653383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.749 #47 NEW cov: 11867 ft: 13949 corp: 13/990b lim: 105 exec/s: 0 rss: 69Mb L: 90/98 MS: 1 ShuffleBytes- 00:07:38.750 [2024-12-15 10:45:27.692679] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.750 [2024-12-15 10:45:27.692706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.750 [2024-12-15 10:45:27.692824] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6582955731027230555 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.750 [2024-12-15 10:45:27.692849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.750 #48 NEW cov: 11867 ft: 13985 corp: 14/1045b lim: 105 exec/s: 0 rss: 69Mb L: 55/98 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:07:38.750 [2024-12-15 10:45:27.733173] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18436006075833516031 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.750 [2024-12-15 10:45:27.733203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.750 [2024-12-15 10:45:27.733318] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446702129058938879 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.750 [2024-12-15 10:45:27.733354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.750 [2024-12-15 10:45:27.733479] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.750 [2024-12-15 10:45:27.733503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.750 [2024-12-15 10:45:27.733625] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:15697817503731931609 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.750 [2024-12-15 10:45:27.733648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.750 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:38.750 #49 NEW cov: 11890 ft: 14015 corp: 15/1142b lim: 105 exec/s: 0 rss: 69Mb L: 97/98 MS: 1 CrossOver- 00:07:39.009 [2024-12-15 10:45:27.782998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.009 [2024-12-15 10:45:27.783031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.009 [2024-12-15 10:45:27.783153] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6569444929382865755 len:33372 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.009 [2024-12-15 10:45:27.783181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.009 #50 NEW cov: 11890 ft: 14032 corp: 16/1190b lim: 105 exec/s: 0 rss: 69Mb L: 48/98 MS: 1 InsertByte- 00:07:39.009 [2024-12-15 10:45:27.823304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.009 [2024-12-15 10:45:27.823333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.009 [2024-12-15 10:45:27.823420] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6582956269430856539 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.009 [2024-12-15 10:45:27.823458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.009 [2024-12-15 10:45:27.823589] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.009 [2024-12-15 10:45:27.823611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.009 [2024-12-15 10:45:27.823730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.009 [2024-12-15 10:45:27.823754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.009 #51 NEW cov: 11890 ft: 14058 corp: 17/1287b lim: 105 exec/s: 0 rss: 69Mb L: 97/98 MS: 1 CrossOver- 00:07:39.009 [2024-12-15 10:45:27.863468] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23514 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.009 [2024-12-15 10:45:27.863497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.009 [2024-12-15 10:45:27.863604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.009 [2024-12-15 10:45:27.863628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.009 [2024-12-15 10:45:27.863747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.009 [2024-12-15 10:45:27.863772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.009 [2024-12-15 10:45:27.863897] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.009 [2024-12-15 10:45:27.863916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.009 #52 NEW cov: 11890 ft: 14088 corp: 18/1386b lim: 105 exec/s: 52 rss: 70Mb L: 99/99 MS: 1 CopyPart- 00:07:39.009 [2024-12-15 10:45:27.903359] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6582955727325453147 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.009 [2024-12-15 10:45:27.903388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.009 [2024-12-15 10:45:27.903513] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:3124191214444436315 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.009 [2024-12-15 10:45:27.903536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.009 #60 NEW cov: 11890 ft: 14162 corp: 19/1434b lim: 105 exec/s: 60 rss: 70Mb L: 48/99 MS: 3 ChangeByte-CopyPart-CrossOver- 00:07:39.009 [2024-12-15 10:45:27.942973] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6582955727325453147 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.009 [2024-12-15 10:45:27.943006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.009 [2024-12-15 10:45:27.943148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:3124191214444436315 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.009 [2024-12-15 10:45:27.943171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.009 #61 NEW cov: 11890 ft: 14208 corp: 20/1482b lim: 105 exec/s: 61 rss: 70Mb L: 48/99 MS: 1 ChangeBinInt- 00:07:39.009 [2024-12-15 10:45:27.993311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.009 [2024-12-15 10:45:27.993344] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.009 [2024-12-15 10:45:27.993464] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.009 [2024-12-15 10:45:27.993491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.009 [2024-12-15 10:45:27.993614] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.009 [2024-12-15 10:45:27.993641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.009 #62 NEW cov: 11890 ft: 14239 corp: 21/1555b lim: 105 exec/s: 62 rss: 70Mb L: 73/99 MS: 1 InsertRepeatedBytes- 00:07:39.269 [2024-12-15 10:45:28.054119] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:25562 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.269 [2024-12-15 10:45:28.054149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.269 [2024-12-15 10:45:28.054247] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.269 [2024-12-15 10:45:28.054269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.269 [2024-12-15 10:45:28.054389] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.269 [2024-12-15 10:45:28.054409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.269 [2024-12-15 10:45:28.054553] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.269 [2024-12-15 10:45:28.054576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.269 #63 NEW cov: 11890 ft: 14300 corp: 22/1654b lim: 105 exec/s: 63 rss: 70Mb L: 99/99 MS: 1 ChangeBinInt- 00:07:39.269 [2024-12-15 10:45:28.103970] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6569163454406155099 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.269 [2024-12-15 10:45:28.104003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.269 [2024-12-15 10:45:28.104102] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.269 [2024-12-15 10:45:28.104124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.269 [2024-12-15 10:45:28.104241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:39.269 [2024-12-15 10:45:28.104264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.269 [2024-12-15 10:45:28.104387] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.269 [2024-12-15 10:45:28.104408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.269 #64 NEW cov: 11890 ft: 14328 corp: 23/1754b lim: 105 exec/s: 64 rss: 70Mb L: 100/100 MS: 1 InsertByte- 00:07:39.269 [2024-12-15 10:45:28.153974] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6582955727325453147 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.269 [2024-12-15 10:45:28.154011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.269 [2024-12-15 10:45:28.154054] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:3124191214444436315 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.269 [2024-12-15 10:45:28.154066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.269 #65 NEW cov: 11899 ft: 14412 corp: 24/1802b lim: 105 exec/s: 65 rss: 70Mb L: 48/100 MS: 1 ChangeByte- 00:07:39.269 [2024-12-15 10:45:28.194123] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.269 [2024-12-15 10:45:28.194155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.269 [2024-12-15 10:45:28.194254] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6582956269430856539 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.269 [2024-12-15 10:45:28.194274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.269 [2024-12-15 10:45:28.194392] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.269 [2024-12-15 10:45:28.194413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.269 [2024-12-15 10:45:28.194534] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.269 [2024-12-15 10:45:28.194561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.269 #66 NEW cov: 11899 ft: 14414 corp: 25/1899b lim: 105 exec/s: 66 rss: 70Mb L: 97/100 MS: 1 ChangeByte- 00:07:39.269 [2024-12-15 10:45:28.234294] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6582955727325453147 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.269 [2024-12-15 10:45:28.234325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.269 
[2024-12-15 10:45:28.234441] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:3124191214444436315 len:56412 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.269 [2024-12-15 10:45:28.234467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.269 #67 NEW cov: 11899 ft: 14441 corp: 26/1948b lim: 105 exec/s: 67 rss: 70Mb L: 49/100 MS: 1 InsertByte- 00:07:39.529 [2024-12-15 10:45:28.294926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:10778685752873424277 len:38294 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.529 [2024-12-15 10:45:28.294959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.529 [2024-12-15 10:45:28.295067] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:10778685752873424277 len:38294 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.529 [2024-12-15 10:45:28.295092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.529 [2024-12-15 10:45:28.295211] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:10778685752873424277 len:38294 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.529 [2024-12-15 10:45:28.295234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.529 [2024-12-15 10:45:28.295360] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:10778685752873424277 len:38294 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.529 [2024-12-15 10:45:28.295383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.530 #73 NEW cov: 11899 ft: 14500 corp: 27/2051b lim: 105 exec/s: 73 rss: 70Mb L: 103/103 MS: 1 InsertRepeatedBytes- 00:07:39.530 [2024-12-15 10:45:28.344763] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6569163454406155099 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.530 [2024-12-15 10:45:28.344791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.530 [2024-12-15 10:45:28.344889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.530 [2024-12-15 10:45:28.344911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.530 [2024-12-15 10:45:28.345033] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6582955730387196377 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.530 [2024-12-15 10:45:28.345052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.530 [2024-12-15 10:45:28.345171] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.530 [2024-12-15 10:45:28.345190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.530 [2024-12-15 10:45:28.345313] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:9393201897866353499 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.530 [2024-12-15 10:45:28.345335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:39.530 #74 NEW cov: 11899 ft: 14529 corp: 28/2156b lim: 105 exec/s: 74 rss: 70Mb L: 105/105 MS: 1 CopyPart- 00:07:39.530 [2024-12-15 10:45:28.395158] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446702124948520959 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.530 [2024-12-15 10:45:28.395189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.530 [2024-12-15 10:45:28.395264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:15697817505862638041 len:9178 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.530 [2024-12-15 10:45:28.395288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.530 [2024-12-15 10:45:28.395404] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.530 [2024-12-15 10:45:28.395430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.530 [2024-12-15 10:45:28.395553] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.530 [2024-12-15 10:45:28.395577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.530 #75 NEW cov: 11899 ft: 14546 corp: 29/2247b lim: 105 exec/s: 75 rss: 70Mb L: 91/105 MS: 1 InsertByte- 00:07:39.530 [2024-12-15 10:45:28.435163] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.530 [2024-12-15 10:45:28.435194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.530 [2024-12-15 10:45:28.435297] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6582956269430856539 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.530 [2024-12-15 10:45:28.435317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.530 [2024-12-15 10:45:28.435441] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.530 [2024-12-15 10:45:28.435464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.530 [2024-12-15 10:45:28.435586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:15697817505862637869 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.530 [2024-12-15 10:45:28.435605] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.530 #76 NEW cov: 11899 ft: 14566 corp: 30/2344b lim: 105 exec/s: 76 rss: 70Mb L: 97/105 MS: 1 ChangeByte- 00:07:39.530 [2024-12-15 10:45:28.484808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6582955727325453147 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.530 [2024-12-15 10:45:28.484840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.530 [2024-12-15 10:45:28.484960] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:3124191214444436315 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.530 [2024-12-15 10:45:28.484982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.530 #77 NEW cov: 11899 ft: 14581 corp: 31/2392b lim: 105 exec/s: 77 rss: 70Mb L: 48/105 MS: 1 ChangeByte- 00:07:39.530 [2024-12-15 10:45:28.525529] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446702124948520959 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.530 [2024-12-15 10:45:28.525561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.530 [2024-12-15 10:45:28.525688] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:15697817504101030361 len:9178 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.530 [2024-12-15 10:45:28.525711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.530 [2024-12-15 10:45:28.525830] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.530 [2024-12-15 10:45:28.525851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.530 [2024-12-15 10:45:28.525974] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.530 [2024-12-15 10:45:28.525998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.790 #78 NEW cov: 11899 ft: 14603 corp: 32/2483b lim: 105 exec/s: 78 rss: 70Mb L: 91/105 MS: 1 ChangeByte- 00:07:39.790 [2024-12-15 10:45:28.575348] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.790 [2024-12-15 10:45:28.575374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.790 [2024-12-15 10:45:28.575501] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.790 [2024-12-15 10:45:28.575525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.790 #79 NEW cov: 11899 ft: 14611 corp: 33/2530b lim: 105 exec/s: 79 rss: 70Mb L: 47/105 MS: 1 PersAutoDict- DE: 
"\377\377\377\377\377\377\377\377"- 00:07:39.790 [2024-12-15 10:45:28.615169] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6582955894829165403 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.790 [2024-12-15 10:45:28.615197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.790 #80 NEW cov: 11899 ft: 15056 corp: 34/2555b lim: 105 exec/s: 80 rss: 70Mb L: 25/105 MS: 1 EraseBytes- 00:07:39.790 [2024-12-15 10:45:28.665527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446702124948520959 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.790 [2024-12-15 10:45:28.665560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.790 [2024-12-15 10:45:28.665671] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.790 [2024-12-15 10:45:28.665695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.790 [2024-12-15 10:45:28.665809] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.790 [2024-12-15 10:45:28.665833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.790 [2024-12-15 10:45:28.665953] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.790 [2024-12-15 10:45:28.665979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.790 #81 NEW cov: 11899 ft: 15066 corp: 35/2645b lim: 105 exec/s: 81 rss: 70Mb L: 90/105 MS: 1 ChangeBit- 00:07:39.790 [2024-12-15 10:45:28.706002] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18436006075833516031 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.790 [2024-12-15 10:45:28.706033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.790 [2024-12-15 10:45:28.706135] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446702129058938879 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.790 [2024-12-15 10:45:28.706158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.790 [2024-12-15 10:45:28.706273] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.790 [2024-12-15 10:45:28.706293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.790 [2024-12-15 10:45:28.706421] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11086131485304543705 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.790 [2024-12-15 10:45:28.706445] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.790 #82 NEW cov: 11899 ft: 15077 corp: 36/2742b lim: 105 exec/s: 82 rss: 70Mb L: 97/105 MS: 1 ChangeBit- 00:07:39.790 [2024-12-15 10:45:28.756199] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:10778685752873424277 len:38294 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.790 [2024-12-15 10:45:28.756229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.790 [2024-12-15 10:45:28.756329] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:10778685752873424277 len:38294 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.790 [2024-12-15 10:45:28.756351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.790 [2024-12-15 10:45:28.756472] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:10778685752873424277 len:38294 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.790 [2024-12-15 10:45:28.756495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.790 [2024-12-15 10:45:28.756624] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:10778685752873424277 len:38294 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.790 [2024-12-15 10:45:28.756649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.790 #83 NEW cov: 11899 ft: 15107 corp: 37/2833b lim: 105 exec/s: 83 rss: 70Mb L: 91/105 MS: 1 EraseBytes- 00:07:39.790 [2024-12-15 10:45:28.796091] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18436006075833516031 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.790 [2024-12-15 10:45:28.796122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.790 [2024-12-15 10:45:28.796213] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446702129058938879 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.790 [2024-12-15 10:45:28.796236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.790 [2024-12-15 10:45:28.796352] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:15697817505862638041 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.791 [2024-12-15 10:45:28.796372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.791 [2024-12-15 10:45:28.796495] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:15697817503731931609 len:55770 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.791 [2024-12-15 10:45:28.796516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.791 [2024-12-15 10:45:28.796645] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:18446744073069517311 len:65498 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.791 [2024-12-15 
10:45:28.796667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:40.051 #84 NEW cov: 11899 ft: 15117 corp: 38/2938b lim: 105 exec/s: 84 rss: 70Mb L: 105/105 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:07:40.051 [2024-12-15 10:45:28.836375] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:10778685752873424277 len:38294 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.051 [2024-12-15 10:45:28.836404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.051 [2024-12-15 10:45:28.836504] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:10778685752873424277 len:38254 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.051 [2024-12-15 10:45:28.836521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.051 [2024-12-15 10:45:28.836647] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:10778685752873424277 len:38294 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.051 [2024-12-15 10:45:28.836668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.051 [2024-12-15 10:45:28.836796] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:10778685752873424277 len:38294 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.051 [2024-12-15 10:45:28.836819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.051 #85 NEW cov: 11899 ft: 15135 corp: 39/3036b lim: 105 exec/s: 85 rss: 70Mb L: 98/105 MS: 1 ChangeByte- 00:07:40.051 [2024-12-15 10:45:28.886136] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.051 [2024-12-15 10:45:28.886167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.051 [2024-12-15 10:45:28.886301] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6569444927850175323 len:33372 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.051 [2024-12-15 10:45:28.886324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.051 #86 NEW cov: 11899 ft: 15216 corp: 40/3084b lim: 105 exec/s: 43 rss: 70Mb L: 48/105 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:07:40.051 #86 DONE cov: 11899 ft: 15216 corp: 40/3084b lim: 105 exec/s: 43 rss: 70Mb 00:07:40.051 ###### Recommended dictionary. ###### 00:07:40.051 "\377\377\377\377\377\377\377\377" # Uses: 5 00:07:40.051 "\000\000\000\000" # Uses: 1 00:07:40.051 ###### End of recommended dictionary. 
###### 00:07:40.051 Done 86 runs in 2 second(s) 00:07:40.051 10:45:29 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_16.conf 00:07:40.051 10:45:29 -- ../common.sh@72 -- # (( i++ )) 00:07:40.051 10:45:29 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:40.051 10:45:29 -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:07:40.051 10:45:29 -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:07:40.051 10:45:29 -- nvmf/run.sh@24 -- # local timen=1 00:07:40.051 10:45:29 -- nvmf/run.sh@25 -- # local core=0x1 00:07:40.051 10:45:29 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:40.051 10:45:29 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:07:40.051 10:45:29 -- nvmf/run.sh@29 -- # printf %02d 17 00:07:40.051 10:45:29 -- nvmf/run.sh@29 -- # port=4417 00:07:40.051 10:45:29 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:40.051 10:45:29 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:07:40.051 10:45:29 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:40.051 10:45:29 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 -r /var/tmp/spdk17.sock 00:07:40.310 [2024-12-15 10:45:29.081386] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:40.310 [2024-12-15 10:45:29.081480] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1314439 ] 00:07:40.310 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.569 [2024-12-15 10:45:29.336485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.569 [2024-12-15 10:45:29.421910] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:40.569 [2024-12-15 10:45:29.422055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.569 [2024-12-15 10:45:29.479797] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.569 [2024-12-15 10:45:29.496092] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:07:40.569 INFO: Running with entropic power schedule (0xFF, 100). 00:07:40.569 INFO: Seed: 1411219391 00:07:40.569 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:40.569 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:40.569 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:40.569 INFO: A corpus is not provided, starting from an empty corpus 00:07:40.569 #2 INITED exec/s: 0 rss: 60Mb 00:07:40.569 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:40.569 This may also happen if the target rejected all inputs we tried so far 00:07:40.569 [2024-12-15 10:45:29.567006] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.569 [2024-12-15 10:45:29.567052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.569 [2024-12-15 10:45:29.567110] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.569 [2024-12-15 10:45:29.567129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.569 [2024-12-15 10:45:29.567196] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.569 [2024-12-15 10:45:29.567215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.569 [2024-12-15 10:45:29.567284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.569 [2024-12-15 10:45:29.567303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.088 NEW_FUNC[1/672]: 0x4545f8 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:07:41.088 NEW_FUNC[2/672]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:41.088 #9 NEW cov: 11681 ft: 11685 corp: 2/116b lim: 120 exec/s: 0 rss: 68Mb L: 115/115 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:41.088 [2024-12-15 10:45:29.896963] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.088 [2024-12-15 10:45:29.897015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.088 [2024-12-15 10:45:29.897152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.088 [2024-12-15 10:45:29.897183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.088 [2024-12-15 10:45:29.897303] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.088 [2024-12-15 10:45:29.897329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.088 [2024-12-15 10:45:29.897454] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.088 [2024-12-15 10:45:29.897482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.088 #15 NEW 
cov: 11797 ft: 12263 corp: 3/231b lim: 120 exec/s: 0 rss: 68Mb L: 115/115 MS: 1 ShuffleBytes- 00:07:41.088 [2024-12-15 10:45:29.946989] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.088 [2024-12-15 10:45:29.947021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.088 [2024-12-15 10:45:29.947123] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.088 [2024-12-15 10:45:29.947143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.088 [2024-12-15 10:45:29.947259] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.088 [2024-12-15 10:45:29.947282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.088 [2024-12-15 10:45:29.947402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.088 [2024-12-15 10:45:29.947430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.088 #16 NEW cov: 11803 ft: 12660 corp: 4/350b lim: 120 exec/s: 0 rss: 68Mb L: 119/119 MS: 1 CopyPart- 00:07:41.088 [2024-12-15 10:45:29.987131] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.088 [2024-12-15 10:45:29.987165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.088 [2024-12-15 10:45:29.987249] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.088 [2024-12-15 10:45:29.987273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.088 [2024-12-15 10:45:29.987387] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.088 [2024-12-15 10:45:29.987409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.088 [2024-12-15 10:45:29.987527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.088 [2024-12-15 10:45:29.987551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.088 #17 NEW cov: 11888 ft: 12893 corp: 5/466b lim: 120 exec/s: 0 rss: 68Mb L: 116/119 MS: 1 InsertByte- 00:07:41.088 [2024-12-15 10:45:30.036954] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.088 [2024-12-15 10:45:30.036990] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.088 [2024-12-15 10:45:30.037111] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.088 [2024-12-15 10:45:30.037136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.088 #23 NEW cov: 11888 ft: 13480 corp: 6/528b lim: 120 exec/s: 0 rss: 68Mb L: 62/119 MS: 1 EraseBytes- 00:07:41.088 [2024-12-15 10:45:30.087459] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.088 [2024-12-15 10:45:30.087492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.088 [2024-12-15 10:45:30.087604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.088 [2024-12-15 10:45:30.087628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.088 [2024-12-15 10:45:30.087749] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.088 [2024-12-15 10:45:30.087772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.089 [2024-12-15 10:45:30.087889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.089 [2024-12-15 10:45:30.087908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.348 #24 NEW cov: 11888 ft: 13637 corp: 7/647b lim: 120 exec/s: 0 rss: 68Mb L: 119/119 MS: 1 CopyPart- 00:07:41.348 [2024-12-15 10:45:30.127529] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.348 [2024-12-15 10:45:30.127558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.348 [2024-12-15 10:45:30.127641] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.348 [2024-12-15 10:45:30.127663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.348 [2024-12-15 10:45:30.127776] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.348 [2024-12-15 10:45:30.127798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.348 [2024-12-15 10:45:30.127915] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:41.348 [2024-12-15 10:45:30.127936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.348 #25 NEW cov: 11888 ft: 13766 corp: 8/762b lim: 120 exec/s: 0 rss: 68Mb L: 115/119 MS: 1 ChangeBit- 00:07:41.348 [2024-12-15 10:45:30.167600] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.348 [2024-12-15 10:45:30.167630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.348 [2024-12-15 10:45:30.167732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.348 [2024-12-15 10:45:30.167755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.348 [2024-12-15 10:45:30.167864] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.348 [2024-12-15 10:45:30.167888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.348 [2024-12-15 10:45:30.168001] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.348 [2024-12-15 10:45:30.168020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.348 #26 NEW cov: 11888 ft: 13809 corp: 9/878b lim: 120 exec/s: 0 rss: 68Mb L: 116/119 MS: 1 InsertByte- 00:07:41.348 [2024-12-15 10:45:30.217909] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.348 [2024-12-15 10:45:30.217936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.348 [2024-12-15 10:45:30.218061] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.348 [2024-12-15 10:45:30.218085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.348 [2024-12-15 10:45:30.218207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.348 [2024-12-15 10:45:30.218231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.348 [2024-12-15 10:45:30.218348] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.348 [2024-12-15 10:45:30.218373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.348 #27 NEW cov: 11888 ft: 13876 corp: 10/994b lim: 120 exec/s: 0 rss: 68Mb L: 116/119 MS: 1 InsertByte- 00:07:41.348 [2024-12-15 10:45:30.258092] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.348 [2024-12-15 10:45:30.258126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.348 [2024-12-15 10:45:30.258249] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.348 [2024-12-15 10:45:30.258272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.348 [2024-12-15 10:45:30.258390] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.348 [2024-12-15 10:45:30.258418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.348 [2024-12-15 10:45:30.258543] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.348 [2024-12-15 10:45:30.258566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.348 #28 NEW cov: 11888 ft: 13907 corp: 11/1113b lim: 120 exec/s: 0 rss: 68Mb L: 119/119 MS: 1 CrossOver- 00:07:41.348 [2024-12-15 10:45:30.297594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.348 [2024-12-15 10:45:30.297626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.348 [2024-12-15 10:45:30.297748] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4278190080 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.348 [2024-12-15 10:45:30.297773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.348 #29 NEW cov: 11888 ft: 13949 corp: 12/1175b lim: 120 exec/s: 0 rss: 68Mb L: 62/119 MS: 1 ChangeBinInt- 00:07:41.348 [2024-12-15 10:45:30.348523] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.348 [2024-12-15 10:45:30.348551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.348 [2024-12-15 10:45:30.348645] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.348 [2024-12-15 10:45:30.348664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.348 [2024-12-15 10:45:30.348780] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.348 [2024-12-15 10:45:30.348804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:07:41.348 [2024-12-15 10:45:30.348921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.348 [2024-12-15 10:45:30.348943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.348 [2024-12-15 10:45:30.349055] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:18446744073642442751 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.348 [2024-12-15 10:45:30.349078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:41.608 #30 NEW cov: 11888 ft: 13996 corp: 13/1295b lim: 120 exec/s: 0 rss: 68Mb L: 120/120 MS: 1 InsertRepeatedBytes- 00:07:41.608 [2024-12-15 10:45:30.388446] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.608 [2024-12-15 10:45:30.388474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.608 [2024-12-15 10:45:30.388564] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.608 [2024-12-15 10:45:30.388586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.608 [2024-12-15 10:45:30.388703] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.608 [2024-12-15 10:45:30.388724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.608 [2024-12-15 10:45:30.388836] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.608 [2024-12-15 10:45:30.388859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.608 #31 NEW cov: 11888 ft: 14039 corp: 14/1410b lim: 120 exec/s: 0 rss: 68Mb L: 115/120 MS: 1 ChangeBit- 00:07:41.608 [2024-12-15 10:45:30.438072] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.608 [2024-12-15 10:45:30.438102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.608 [2024-12-15 10:45:30.438227] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.608 [2024-12-15 10:45:30.438250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.608 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:41.608 #32 NEW cov: 11911 ft: 14078 corp: 15/1472b lim: 120 exec/s: 0 rss: 69Mb L: 62/120 MS: 1 ChangeBinInt- 00:07:41.608 [2024-12-15 10:45:30.478628] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.608 [2024-12-15 10:45:30.478659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.608 [2024-12-15 10:45:30.478739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.608 [2024-12-15 10:45:30.478760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.608 [2024-12-15 10:45:30.478872] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.608 [2024-12-15 10:45:30.478892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.608 [2024-12-15 10:45:30.479011] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.608 [2024-12-15 10:45:30.479036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.608 #33 NEW cov: 11911 ft: 14103 corp: 16/1579b lim: 120 exec/s: 0 rss: 69Mb L: 107/120 MS: 1 EraseBytes- 00:07:41.608 [2024-12-15 10:45:30.518841] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.608 [2024-12-15 10:45:30.518874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.608 [2024-12-15 10:45:30.518960] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.608 [2024-12-15 10:45:30.518986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.609 [2024-12-15 10:45:30.519102] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.609 [2024-12-15 10:45:30.519152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.609 [2024-12-15 10:45:30.519271] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.609 [2024-12-15 10:45:30.519295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.609 #34 NEW cov: 11911 ft: 14139 corp: 17/1696b lim: 120 exec/s: 34 rss: 69Mb L: 117/120 MS: 1 CrossOver- 00:07:41.609 [2024-12-15 10:45:30.558857] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.609 [2024-12-15 10:45:30.558886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.609 
[2024-12-15 10:45:30.559002] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.609 [2024-12-15 10:45:30.559021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.609 [2024-12-15 10:45:30.559141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.609 [2024-12-15 10:45:30.559159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.609 [2024-12-15 10:45:30.559277] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.609 [2024-12-15 10:45:30.559298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.609 #35 NEW cov: 11911 ft: 14165 corp: 18/1815b lim: 120 exec/s: 35 rss: 69Mb L: 119/120 MS: 1 CrossOver- 00:07:41.609 [2024-12-15 10:45:30.598954] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.609 [2024-12-15 10:45:30.598986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.609 [2024-12-15 10:45:30.599083] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.609 [2024-12-15 10:45:30.599108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.609 [2024-12-15 10:45:30.599232] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.609 [2024-12-15 10:45:30.599253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.609 [2024-12-15 10:45:30.599369] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.609 [2024-12-15 10:45:30.599393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.609 #36 NEW cov: 11911 ft: 14172 corp: 19/1931b lim: 120 exec/s: 36 rss: 69Mb L: 116/120 MS: 1 InsertByte- 00:07:41.869 [2024-12-15 10:45:30.639074] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.869 [2024-12-15 10:45:30.639106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.869 [2024-12-15 10:45:30.639234] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:49152 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.869 [2024-12-15 10:45:30.639256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.869 [2024-12-15 10:45:30.639375] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.869 [2024-12-15 10:45:30.639395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.869 [2024-12-15 10:45:30.639527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.869 [2024-12-15 10:45:30.639551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.869 #37 NEW cov: 11911 ft: 14188 corp: 20/2050b lim: 120 exec/s: 37 rss: 70Mb L: 119/120 MS: 1 ChangeBit- 00:07:41.869 [2024-12-15 10:45:30.678869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.869 [2024-12-15 10:45:30.678895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.869 [2024-12-15 10:45:30.679019] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.869 [2024-12-15 10:45:30.679041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.869 #38 NEW cov: 11911 ft: 14208 corp: 21/2108b lim: 120 exec/s: 38 rss: 70Mb L: 58/120 MS: 1 EraseBytes- 00:07:41.869 [2024-12-15 10:45:30.729302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.869 [2024-12-15 10:45:30.729334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.869 [2024-12-15 10:45:30.729426] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.869 [2024-12-15 10:45:30.729465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.869 [2024-12-15 10:45:30.729588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.870 [2024-12-15 10:45:30.729609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.870 [2024-12-15 10:45:30.729731] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.870 [2024-12-15 10:45:30.729755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.870 #39 NEW cov: 11911 ft: 14217 corp: 22/2227b lim: 120 exec/s: 39 rss: 70Mb L: 119/120 MS: 1 CopyPart- 00:07:41.870 [2024-12-15 10:45:30.769654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:41.870 [2024-12-15 10:45:30.769682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.870 [2024-12-15 10:45:30.769778] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.870 [2024-12-15 10:45:30.769800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.870 [2024-12-15 10:45:30.769925] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.870 [2024-12-15 10:45:30.769950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.870 [2024-12-15 10:45:30.770063] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.870 [2024-12-15 10:45:30.770085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.870 [2024-12-15 10:45:30.770212] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:18446744073642442751 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.870 [2024-12-15 10:45:30.770232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:41.870 #40 NEW cov: 11911 ft: 14235 corp: 23/2347b lim: 120 exec/s: 40 rss: 70Mb L: 120/120 MS: 1 CopyPart- 00:07:41.870 [2024-12-15 10:45:30.819830] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.870 [2024-12-15 10:45:30.819861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.870 [2024-12-15 10:45:30.819982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.870 [2024-12-15 10:45:30.820005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.870 [2024-12-15 10:45:30.820127] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.870 [2024-12-15 10:45:30.820164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.870 [2024-12-15 10:45:30.820282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.870 [2024-12-15 10:45:30.820305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.870 [2024-12-15 10:45:30.820427] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.870 [2024-12-15 10:45:30.820449] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:41.870 #41 NEW cov: 11911 ft: 14247 corp: 24/2467b lim: 120 exec/s: 41 rss: 70Mb L: 120/120 MS: 1 InsertByte- 00:07:41.870 [2024-12-15 10:45:30.859659] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.870 [2024-12-15 10:45:30.859690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.870 [2024-12-15 10:45:30.859797] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.870 [2024-12-15 10:45:30.859821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.870 [2024-12-15 10:45:30.859945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.870 [2024-12-15 10:45:30.859968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.870 [2024-12-15 10:45:30.860097] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.870 [2024-12-15 10:45:30.860117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.870 #42 NEW cov: 11911 ft: 14257 corp: 25/2576b lim: 120 exec/s: 42 rss: 70Mb L: 109/120 MS: 1 CrossOver- 00:07:42.135 [2024-12-15 10:45:30.900129] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.135 [2024-12-15 10:45:30.900160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.135 [2024-12-15 10:45:30.900274] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.135 [2024-12-15 10:45:30.900302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.136 [2024-12-15 10:45:30.900424] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.136 [2024-12-15 10:45:30.900448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.136 [2024-12-15 10:45:30.900572] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446673704965373951 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.136 [2024-12-15 10:45:30.900596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.136 [2024-12-15 10:45:30.900721] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:18446744073642442751 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.136 
[2024-12-15 10:45:30.900744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:42.136 #43 NEW cov: 11911 ft: 14276 corp: 26/2696b lim: 120 exec/s: 43 rss: 70Mb L: 120/120 MS: 1 ChangeBit- 00:07:42.136 [2024-12-15 10:45:30.950149] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446743051507335167 len:256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.136 [2024-12-15 10:45:30.950180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.136 [2024-12-15 10:45:30.950294] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.136 [2024-12-15 10:45:30.950316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.136 [2024-12-15 10:45:30.950436] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.136 [2024-12-15 10:45:30.950457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.136 [2024-12-15 10:45:30.950573] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446673704965373951 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.136 [2024-12-15 10:45:30.950599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.136 [2024-12-15 10:45:30.950720] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:18446744073642442751 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.136 [2024-12-15 10:45:30.950742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:42.136 #44 NEW cov: 11911 ft: 14288 corp: 27/2816b lim: 120 exec/s: 44 rss: 70Mb L: 120/120 MS: 1 CMP- DE: "\021\000\000\000"- 00:07:42.136 [2024-12-15 10:45:30.990188] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.136 [2024-12-15 10:45:30.990218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.136 [2024-12-15 10:45:30.990318] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446726481523507199 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.136 [2024-12-15 10:45:30.990350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.136 [2024-12-15 10:45:30.990474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.136 [2024-12-15 10:45:30.990497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.136 [2024-12-15 10:45:30.990616] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 
len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.136 [2024-12-15 10:45:30.990638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.136 #45 NEW cov: 11911 ft: 14345 corp: 28/2935b lim: 120 exec/s: 45 rss: 70Mb L: 119/120 MS: 1 ChangeBit- 00:07:42.136 [2024-12-15 10:45:31.030179] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:792633530877214719 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.136 [2024-12-15 10:45:31.030212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.136 [2024-12-15 10:45:31.030307] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.136 [2024-12-15 10:45:31.030327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.136 [2024-12-15 10:45:31.030441] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.136 [2024-12-15 10:45:31.030463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.137 [2024-12-15 10:45:31.030577] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.137 [2024-12-15 10:45:31.030597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.137 #49 NEW cov: 11911 ft: 14378 corp: 29/3033b lim: 120 exec/s: 49 rss: 70Mb L: 98/120 MS: 4 ChangeBinInt-InsertByte-ChangeBinInt-CrossOver- 00:07:42.137 [2024-12-15 10:45:31.070592] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.137 [2024-12-15 10:45:31.070623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.137 [2024-12-15 10:45:31.070726] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.137 [2024-12-15 10:45:31.070747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.137 [2024-12-15 10:45:31.070869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.137 [2024-12-15 10:45:31.070892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.137 [2024-12-15 10:45:31.071008] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.137 [2024-12-15 10:45:31.071029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.137 [2024-12-15 10:45:31.071142] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:18446744073642442751 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.137 [2024-12-15 10:45:31.071161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:42.137 #50 NEW cov: 11911 ft: 14388 corp: 30/3153b lim: 120 exec/s: 50 rss: 70Mb L: 120/120 MS: 1 CopyPart- 00:07:42.137 [2024-12-15 10:45:31.110634] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.137 [2024-12-15 10:45:31.110667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.137 [2024-12-15 10:45:31.110753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.137 [2024-12-15 10:45:31.110774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.137 [2024-12-15 10:45:31.110887] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.137 [2024-12-15 10:45:31.110908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.137 [2024-12-15 10:45:31.111016] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073575333887 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.137 [2024-12-15 10:45:31.111037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.137 [2024-12-15 10:45:31.111149] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:18446744073642442751 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.137 [2024-12-15 10:45:31.111169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:42.137 #51 NEW cov: 11911 ft: 14434 corp: 31/3273b lim: 120 exec/s: 51 rss: 70Mb L: 120/120 MS: 1 ChangeBit- 00:07:42.407 [2024-12-15 10:45:31.150557] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.150598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.408 [2024-12-15 10:45:31.150696] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.150716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.408 [2024-12-15 10:45:31.150832] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.150854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.408 
[2024-12-15 10:45:31.150972] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.150992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.408 #52 NEW cov: 11911 ft: 14488 corp: 32/3392b lim: 120 exec/s: 52 rss: 70Mb L: 119/120 MS: 1 ChangeBinInt- 00:07:42.408 [2024-12-15 10:45:31.190922] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:72058620535111679 len:256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.190951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.408 [2024-12-15 10:45:31.191038] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.191058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.408 [2024-12-15 10:45:31.191170] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.191194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.408 [2024-12-15 10:45:31.191313] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446673704965373951 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.191333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.408 [2024-12-15 10:45:31.191450] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:18446744073642442751 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.191470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:42.408 #53 NEW cov: 11911 ft: 14489 corp: 33/3512b lim: 120 exec/s: 53 rss: 70Mb L: 120/120 MS: 1 ChangeBinInt- 00:07:42.408 [2024-12-15 10:45:31.231101] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.231130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.408 [2024-12-15 10:45:31.231202] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.231220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.408 [2024-12-15 10:45:31.231339] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.231359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 
cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.408 [2024-12-15 10:45:31.231482] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073575333887 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.231504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.408 [2024-12-15 10:45:31.231616] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:18446744073642442751 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.231639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:42.408 #54 NEW cov: 11911 ft: 14563 corp: 34/3632b lim: 120 exec/s: 54 rss: 70Mb L: 120/120 MS: 1 ChangeBinInt- 00:07:42.408 [2024-12-15 10:45:31.271320] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.271351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.408 [2024-12-15 10:45:31.271439] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446726481523507199 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.271471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.408 [2024-12-15 10:45:31.271585] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.271605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.408 [2024-12-15 10:45:31.271735] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.271755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.408 [2024-12-15 10:45:31.271871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.271895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:42.408 #55 NEW cov: 11911 ft: 14574 corp: 35/3752b lim: 120 exec/s: 55 rss: 70Mb L: 120/120 MS: 1 CopyPart- 00:07:42.408 [2024-12-15 10:45:31.321365] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.321397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.408 [2024-12-15 10:45:31.321526] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.321552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.408 [2024-12-15 10:45:31.321669] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.321690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.408 [2024-12-15 10:45:31.321807] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.321826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.408 [2024-12-15 10:45:31.321942] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.321967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:42.408 #56 NEW cov: 11911 ft: 14586 corp: 36/3872b lim: 120 exec/s: 56 rss: 70Mb L: 120/120 MS: 1 CopyPart- 00:07:42.408 [2024-12-15 10:45:31.361225] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.361258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.408 [2024-12-15 10:45:31.361350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.361372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.408 [2024-12-15 10:45:31.361492] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.361513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.408 [2024-12-15 10:45:31.361639] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.361662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.408 #57 NEW cov: 11911 ft: 14598 corp: 37/3988b lim: 120 exec/s: 57 rss: 70Mb L: 116/120 MS: 1 ChangeByte- 00:07:42.408 [2024-12-15 10:45:31.401382] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.401412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.408 [2024-12-15 10:45:31.401541] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.401562] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.408 [2024-12-15 10:45:31.401679] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744069422448639 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.401702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.408 [2024-12-15 10:45:31.401819] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.408 [2024-12-15 10:45:31.401845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.668 #58 NEW cov: 11911 ft: 14606 corp: 38/4107b lim: 120 exec/s: 58 rss: 70Mb L: 119/120 MS: 1 ChangeBinInt- 00:07:42.668 [2024-12-15 10:45:31.451560] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.668 [2024-12-15 10:45:31.451592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.668 [2024-12-15 10:45:31.451685] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.668 [2024-12-15 10:45:31.451710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.668 [2024-12-15 10:45:31.451831] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.668 [2024-12-15 10:45:31.451851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.668 [2024-12-15 10:45:31.451972] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.668 [2024-12-15 10:45:31.451994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.668 #59 NEW cov: 11911 ft: 14616 corp: 39/4216b lim: 120 exec/s: 59 rss: 70Mb L: 109/120 MS: 1 ShuffleBytes- 00:07:42.668 [2024-12-15 10:45:31.501625] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.668 [2024-12-15 10:45:31.501659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.668 [2024-12-15 10:45:31.501771] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.668 [2024-12-15 10:45:31.501793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.668 [2024-12-15 10:45:31.501900] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:42.668 [2024-12-15 10:45:31.501922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.668 [2024-12-15 10:45:31.502029] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:43520 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.668 [2024-12-15 10:45:31.502049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.668 #60 NEW cov: 11911 ft: 14623 corp: 40/4332b lim: 120 exec/s: 60 rss: 70Mb L: 116/120 MS: 1 ChangeByte- 00:07:42.668 [2024-12-15 10:45:31.551392] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744071931559935 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.668 [2024-12-15 10:45:31.551431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.668 [2024-12-15 10:45:31.551554] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.668 [2024-12-15 10:45:31.551576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.668 #65 NEW cov: 11911 ft: 14636 corp: 41/4382b lim: 120 exec/s: 32 rss: 70Mb L: 50/120 MS: 5 CopyPart-ChangeBit-InsertByte-ChangeBinInt-CrossOver- 00:07:42.668 #65 DONE cov: 11911 ft: 14636 corp: 41/4382b lim: 120 exec/s: 32 rss: 70Mb 00:07:42.668 ###### Recommended dictionary. ###### 00:07:42.668 "\021\000\000\000" # Uses: 0 00:07:42.668 ###### End of recommended dictionary. 
###### 00:07:42.668 Done 65 runs in 2 second(s) 00:07:42.927 10:45:31 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_17.conf 00:07:42.927 10:45:31 -- ../common.sh@72 -- # (( i++ )) 00:07:42.927 10:45:31 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:42.927 10:45:31 -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:07:42.927 10:45:31 -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:07:42.927 10:45:31 -- nvmf/run.sh@24 -- # local timen=1 00:07:42.927 10:45:31 -- nvmf/run.sh@25 -- # local core=0x1 00:07:42.927 10:45:31 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:42.927 10:45:31 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:07:42.927 10:45:31 -- nvmf/run.sh@29 -- # printf %02d 18 00:07:42.927 10:45:31 -- nvmf/run.sh@29 -- # port=4418 00:07:42.927 10:45:31 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:42.927 10:45:31 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:07:42.927 10:45:31 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:42.927 10:45:31 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 -r /var/tmp/spdk18.sock 00:07:42.927 [2024-12-15 10:45:31.738410] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:42.928 [2024-12-15 10:45:31.738479] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1314935 ] 00:07:42.928 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.187 [2024-12-15 10:45:31.994286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.187 [2024-12-15 10:45:32.077565] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:43.187 [2024-12-15 10:45:32.077708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.187 [2024-12-15 10:45:32.135498] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.187 [2024-12-15 10:45:32.151805] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:07:43.187 INFO: Running with entropic power schedule (0xFF, 100). 00:07:43.187 INFO: Seed: 4069218503 00:07:43.187 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:43.187 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:43.187 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:43.187 INFO: A corpus is not provided, starting from an empty corpus 00:07:43.187 #2 INITED exec/s: 0 rss: 60Mb 00:07:43.187 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:43.187 This may also happen if the target rejected all inputs we tried so far 00:07:43.187 [2024-12-15 10:45:32.197096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.187 [2024-12-15 10:45:32.197126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.187 [2024-12-15 10:45:32.197165] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.187 [2024-12-15 10:45:32.197178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.187 [2024-12-15 10:45:32.197227] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.187 [2024-12-15 10:45:32.197241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.187 [2024-12-15 10:45:32.197291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:43.187 [2024-12-15 10:45:32.197303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.705 NEW_FUNC[1/670]: 0x457e58 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:07:43.705 NEW_FUNC[2/670]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:43.705 #14 NEW cov: 11628 ft: 11629 corp: 2/92b lim: 100 exec/s: 0 rss: 68Mb L: 91/91 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:43.705 [2024-12-15 10:45:32.517990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.705 [2024-12-15 10:45:32.518021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.705 [2024-12-15 10:45:32.518073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.705 [2024-12-15 10:45:32.518088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.705 [2024-12-15 10:45:32.518140] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.705 [2024-12-15 10:45:32.518154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.705 [2024-12-15 10:45:32.518205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:43.705 [2024-12-15 10:45:32.518219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.705 #19 NEW cov: 11741 ft: 12021 corp: 3/178b lim: 100 exec/s: 0 rss: 69Mb L: 86/91 MS: 5 InsertByte-ShuffleBytes-ChangeBinInt-CrossOver-CrossOver- 00:07:43.705 [2024-12-15 10:45:32.557988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.705 [2024-12-15 10:45:32.558014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.705 [2024-12-15 
10:45:32.558062] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.705 [2024-12-15 10:45:32.558077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.705 [2024-12-15 10:45:32.558129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.705 [2024-12-15 10:45:32.558145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.705 [2024-12-15 10:45:32.558199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:43.705 [2024-12-15 10:45:32.558213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.705 #20 NEW cov: 11747 ft: 12279 corp: 4/269b lim: 100 exec/s: 0 rss: 69Mb L: 91/91 MS: 1 CrossOver- 00:07:43.705 [2024-12-15 10:45:32.597906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.705 [2024-12-15 10:45:32.597933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.705 [2024-12-15 10:45:32.597985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.705 [2024-12-15 10:45:32.598000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.705 #26 NEW cov: 11832 ft: 12866 corp: 5/324b lim: 100 exec/s: 0 rss: 69Mb L: 55/91 MS: 1 EraseBytes- 00:07:43.705 [2024-12-15 10:45:32.638264] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.705 [2024-12-15 10:45:32.638289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.705 [2024-12-15 10:45:32.638338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.705 [2024-12-15 10:45:32.638352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.705 [2024-12-15 10:45:32.638405] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.705 [2024-12-15 10:45:32.638422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.705 [2024-12-15 10:45:32.638474] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:43.705 [2024-12-15 10:45:32.638488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.705 #27 NEW cov: 11832 ft: 12981 corp: 6/415b lim: 100 exec/s: 0 rss: 69Mb L: 91/91 MS: 1 ChangeBit- 00:07:43.705 [2024-12-15 10:45:32.678357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.705 [2024-12-15 10:45:32.678383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.705 [2024-12-15 10:45:32.678452] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE 
ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.705 [2024-12-15 10:45:32.678466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.705 [2024-12-15 10:45:32.678517] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.705 [2024-12-15 10:45:32.678530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.705 [2024-12-15 10:45:32.678593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:43.705 [2024-12-15 10:45:32.678607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.705 #28 NEW cov: 11832 ft: 13157 corp: 7/506b lim: 100 exec/s: 0 rss: 69Mb L: 91/91 MS: 1 CMP- DE: "\012\000"- 00:07:43.965 [2024-12-15 10:45:32.718513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.965 [2024-12-15 10:45:32.718539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.965 [2024-12-15 10:45:32.718575] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.965 [2024-12-15 10:45:32.718590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.965 [2024-12-15 10:45:32.718642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.965 [2024-12-15 10:45:32.718655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.965 [2024-12-15 10:45:32.718709] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:43.965 [2024-12-15 10:45:32.718722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.965 #29 NEW cov: 11832 ft: 13256 corp: 8/597b lim: 100 exec/s: 0 rss: 69Mb L: 91/91 MS: 1 ChangeByte- 00:07:43.965 [2024-12-15 10:45:32.758659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.965 [2024-12-15 10:45:32.758686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.965 [2024-12-15 10:45:32.758736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.965 [2024-12-15 10:45:32.758752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.965 [2024-12-15 10:45:32.758801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.965 [2024-12-15 10:45:32.758815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.965 [2024-12-15 10:45:32.758866] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:43.965 [2024-12-15 10:45:32.758881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 
cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.965 #30 NEW cov: 11832 ft: 13298 corp: 9/688b lim: 100 exec/s: 0 rss: 69Mb L: 91/91 MS: 1 ChangeByte- 00:07:43.965 [2024-12-15 10:45:32.798751] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.965 [2024-12-15 10:45:32.798780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.965 [2024-12-15 10:45:32.798816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.965 [2024-12-15 10:45:32.798830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.965 [2024-12-15 10:45:32.798883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.965 [2024-12-15 10:45:32.798898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.965 [2024-12-15 10:45:32.798951] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:43.965 [2024-12-15 10:45:32.798965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.965 #31 NEW cov: 11832 ft: 13323 corp: 10/773b lim: 100 exec/s: 0 rss: 69Mb L: 85/91 MS: 1 InsertRepeatedBytes- 00:07:43.965 [2024-12-15 10:45:32.838865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.965 [2024-12-15 10:45:32.838891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.965 [2024-12-15 10:45:32.838951] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.965 [2024-12-15 10:45:32.838965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.965 [2024-12-15 10:45:32.839017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.965 [2024-12-15 10:45:32.839032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.965 [2024-12-15 10:45:32.839084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:43.965 [2024-12-15 10:45:32.839098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.965 #32 NEW cov: 11832 ft: 13346 corp: 11/864b lim: 100 exec/s: 0 rss: 69Mb L: 91/91 MS: 1 CrossOver- 00:07:43.965 [2024-12-15 10:45:32.878992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.965 [2024-12-15 10:45:32.879018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.965 [2024-12-15 10:45:32.879068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.965 [2024-12-15 10:45:32.879082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.965 [2024-12-15 
10:45:32.879135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.965 [2024-12-15 10:45:32.879150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.965 [2024-12-15 10:45:32.879204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:43.965 [2024-12-15 10:45:32.879218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.965 #33 NEW cov: 11832 ft: 13360 corp: 12/955b lim: 100 exec/s: 0 rss: 69Mb L: 91/91 MS: 1 ShuffleBytes- 00:07:43.965 [2024-12-15 10:45:32.919082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.965 [2024-12-15 10:45:32.919108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.965 [2024-12-15 10:45:32.919151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.965 [2024-12-15 10:45:32.919169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.965 [2024-12-15 10:45:32.919219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.965 [2024-12-15 10:45:32.919233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.965 [2024-12-15 10:45:32.919284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:43.965 [2024-12-15 10:45:32.919298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.965 #34 NEW cov: 11832 ft: 13468 corp: 13/1046b lim: 100 exec/s: 0 rss: 69Mb L: 91/91 MS: 1 ChangeBit- 00:07:43.965 [2024-12-15 10:45:32.959179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.965 [2024-12-15 10:45:32.959204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.965 [2024-12-15 10:45:32.959270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.965 [2024-12-15 10:45:32.959285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.965 [2024-12-15 10:45:32.959337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.965 [2024-12-15 10:45:32.959350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.965 [2024-12-15 10:45:32.959390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:43.965 [2024-12-15 10:45:32.959404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.225 #35 NEW cov: 11832 ft: 13629 corp: 14/1138b lim: 100 exec/s: 0 rss: 69Mb L: 92/92 MS: 1 CopyPart- 00:07:44.225 [2024-12-15 10:45:32.999087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.225 [2024-12-15 10:45:32.999112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.225 [2024-12-15 10:45:32.999149] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.225 [2024-12-15 10:45:32.999163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.225 #36 NEW cov: 11832 ft: 13651 corp: 15/1178b lim: 100 exec/s: 0 rss: 69Mb L: 40/92 MS: 1 EraseBytes- 00:07:44.225 [2024-12-15 10:45:33.039222] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.225 [2024-12-15 10:45:33.039247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.225 [2024-12-15 10:45:33.039293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.225 [2024-12-15 10:45:33.039307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.225 #37 NEW cov: 11832 ft: 13656 corp: 16/1233b lim: 100 exec/s: 0 rss: 69Mb L: 55/92 MS: 1 ShuffleBytes- 00:07:44.225 [2024-12-15 10:45:33.079565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.225 [2024-12-15 10:45:33.079590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.225 [2024-12-15 10:45:33.079636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.225 [2024-12-15 10:45:33.079650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.225 [2024-12-15 10:45:33.079702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.225 [2024-12-15 10:45:33.079735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.225 [2024-12-15 10:45:33.079790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:44.225 [2024-12-15 10:45:33.079804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.225 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:44.225 #38 NEW cov: 11855 ft: 13786 corp: 17/1317b lim: 100 exec/s: 0 rss: 69Mb L: 84/92 MS: 1 CrossOver- 00:07:44.225 [2024-12-15 10:45:33.119473] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.225 [2024-12-15 10:45:33.119499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.225 [2024-12-15 10:45:33.119534] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.225 [2024-12-15 10:45:33.119548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.225 #39 NEW cov: 11855 ft: 13812 
corp: 18/1375b lim: 100 exec/s: 0 rss: 70Mb L: 58/92 MS: 1 InsertRepeatedBytes- 00:07:44.225 [2024-12-15 10:45:33.159572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.225 [2024-12-15 10:45:33.159597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.225 [2024-12-15 10:45:33.159632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.225 [2024-12-15 10:45:33.159646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.225 #40 NEW cov: 11855 ft: 13838 corp: 19/1419b lim: 100 exec/s: 0 rss: 70Mb L: 44/92 MS: 1 CMP- DE: "\000\000\000\010"- 00:07:44.225 [2024-12-15 10:45:33.199916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.225 [2024-12-15 10:45:33.199942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.225 [2024-12-15 10:45:33.199992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.225 [2024-12-15 10:45:33.200004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.225 [2024-12-15 10:45:33.200056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.225 [2024-12-15 10:45:33.200069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.225 [2024-12-15 10:45:33.200121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:44.225 [2024-12-15 10:45:33.200134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.225 #41 NEW cov: 11855 ft: 13869 corp: 20/1517b lim: 100 exec/s: 41 rss: 70Mb L: 98/98 MS: 1 CrossOver- 00:07:44.485 [2024-12-15 10:45:33.240061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.485 [2024-12-15 10:45:33.240087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.485 [2024-12-15 10:45:33.240143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.485 [2024-12-15 10:45:33.240157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.485 [2024-12-15 10:45:33.240209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.485 [2024-12-15 10:45:33.240222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.485 [2024-12-15 10:45:33.240277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:44.485 [2024-12-15 10:45:33.240291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.485 #42 NEW cov: 11855 ft: 13886 corp: 21/1609b lim: 100 exec/s: 42 rss: 70Mb L: 
92/98 MS: 1 InsertByte- 00:07:44.485 [2024-12-15 10:45:33.280168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.485 [2024-12-15 10:45:33.280193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.485 [2024-12-15 10:45:33.280242] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.485 [2024-12-15 10:45:33.280255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.485 [2024-12-15 10:45:33.280305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.485 [2024-12-15 10:45:33.280319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.485 [2024-12-15 10:45:33.280368] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:44.485 [2024-12-15 10:45:33.280382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.485 #43 NEW cov: 11855 ft: 13920 corp: 22/1705b lim: 100 exec/s: 43 rss: 70Mb L: 96/98 MS: 1 InsertRepeatedBytes- 00:07:44.485 [2024-12-15 10:45:33.310260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.485 [2024-12-15 10:45:33.310285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.485 [2024-12-15 10:45:33.310333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.485 [2024-12-15 10:45:33.310346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.485 [2024-12-15 10:45:33.310397] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.485 [2024-12-15 10:45:33.310410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.485 [2024-12-15 10:45:33.310470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:44.485 [2024-12-15 10:45:33.310484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.485 #44 NEW cov: 11855 ft: 13935 corp: 23/1791b lim: 100 exec/s: 44 rss: 70Mb L: 86/98 MS: 1 PersAutoDict- DE: "\000\000\000\010"- 00:07:44.485 [2024-12-15 10:45:33.350150] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.485 [2024-12-15 10:45:33.350175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.485 [2024-12-15 10:45:33.350220] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.485 [2024-12-15 10:45:33.350234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.485 #45 NEW cov: 11855 ft: 13981 corp: 24/1845b lim: 100 exec/s: 45 rss: 70Mb L: 54/98 MS: 1 EraseBytes- 00:07:44.485 
[2024-12-15 10:45:33.390498] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.485 [2024-12-15 10:45:33.390523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.485 [2024-12-15 10:45:33.390576] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.485 [2024-12-15 10:45:33.390594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.485 [2024-12-15 10:45:33.390645] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.485 [2024-12-15 10:45:33.390658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.485 [2024-12-15 10:45:33.390710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:44.485 [2024-12-15 10:45:33.390724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.485 #46 NEW cov: 11855 ft: 13986 corp: 25/1939b lim: 100 exec/s: 46 rss: 70Mb L: 94/98 MS: 1 CopyPart- 00:07:44.485 [2024-12-15 10:45:33.430398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.485 [2024-12-15 10:45:33.430428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.485 [2024-12-15 10:45:33.430480] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.485 [2024-12-15 10:45:33.430493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.485 #49 NEW cov: 11855 ft: 13994 corp: 26/1992b lim: 100 exec/s: 49 rss: 70Mb L: 53/98 MS: 3 CrossOver-EraseBytes-InsertRepeatedBytes- 00:07:44.485 [2024-12-15 10:45:33.470710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.485 [2024-12-15 10:45:33.470735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.485 [2024-12-15 10:45:33.470786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.485 [2024-12-15 10:45:33.470800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.485 [2024-12-15 10:45:33.470852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.485 [2024-12-15 10:45:33.470866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.485 [2024-12-15 10:45:33.470917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:44.485 [2024-12-15 10:45:33.470932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.485 #50 NEW cov: 11855 ft: 14041 corp: 27/2076b lim: 100 exec/s: 50 rss: 70Mb L: 84/98 MS: 1 ChangeBit- 00:07:44.745 [2024-12-15 10:45:33.510739] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.745 [2024-12-15 10:45:33.510764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.745 [2024-12-15 10:45:33.510812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.745 [2024-12-15 10:45:33.510825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.745 [2024-12-15 10:45:33.510879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.745 [2024-12-15 10:45:33.510893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.745 #51 NEW cov: 11855 ft: 14279 corp: 28/2154b lim: 100 exec/s: 51 rss: 70Mb L: 78/98 MS: 1 EraseBytes- 00:07:44.745 [2024-12-15 10:45:33.550955] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.745 [2024-12-15 10:45:33.550981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.745 [2024-12-15 10:45:33.551025] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.745 [2024-12-15 10:45:33.551039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.745 [2024-12-15 10:45:33.551090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.745 [2024-12-15 10:45:33.551103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.745 [2024-12-15 10:45:33.551131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:44.745 [2024-12-15 10:45:33.551145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.745 #52 NEW cov: 11855 ft: 14280 corp: 29/2247b lim: 100 exec/s: 52 rss: 70Mb L: 93/98 MS: 1 CMP- DE: "\377\007"- 00:07:44.745 [2024-12-15 10:45:33.581056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.745 [2024-12-15 10:45:33.581081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.745 [2024-12-15 10:45:33.581135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.745 [2024-12-15 10:45:33.581149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.745 [2024-12-15 10:45:33.581199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.745 [2024-12-15 10:45:33.581212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.745 [2024-12-15 10:45:33.581263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:44.745 [2024-12-15 10:45:33.581277] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.745 #53 NEW cov: 11855 ft: 14282 corp: 30/2335b lim: 100 exec/s: 53 rss: 70Mb L: 88/98 MS: 1 CopyPart- 00:07:44.745 [2024-12-15 10:45:33.621181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.745 [2024-12-15 10:45:33.621205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.745 [2024-12-15 10:45:33.621257] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.745 [2024-12-15 10:45:33.621270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.745 [2024-12-15 10:45:33.621321] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.745 [2024-12-15 10:45:33.621351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.745 [2024-12-15 10:45:33.621404] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:44.745 [2024-12-15 10:45:33.621421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.745 #54 NEW cov: 11855 ft: 14318 corp: 31/2426b lim: 100 exec/s: 54 rss: 70Mb L: 91/98 MS: 1 ChangeByte- 00:07:44.745 [2024-12-15 10:45:33.661277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.745 [2024-12-15 10:45:33.661301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.745 [2024-12-15 10:45:33.661348] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.745 [2024-12-15 10:45:33.661362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.745 [2024-12-15 10:45:33.661421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.745 [2024-12-15 10:45:33.661435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.745 [2024-12-15 10:45:33.661486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:44.745 [2024-12-15 10:45:33.661500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.745 #55 NEW cov: 11855 ft: 14333 corp: 32/2517b lim: 100 exec/s: 55 rss: 70Mb L: 91/98 MS: 1 ShuffleBytes- 00:07:44.745 [2024-12-15 10:45:33.701282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.745 [2024-12-15 10:45:33.701307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.746 [2024-12-15 10:45:33.701354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.746 [2024-12-15 10:45:33.701368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:07:44.746 [2024-12-15 10:45:33.701421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.746 [2024-12-15 10:45:33.701436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.746 #56 NEW cov: 11855 ft: 14339 corp: 33/2596b lim: 100 exec/s: 56 rss: 70Mb L: 79/98 MS: 1 InsertByte- 00:07:44.746 [2024-12-15 10:45:33.741532] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.746 [2024-12-15 10:45:33.741557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.746 [2024-12-15 10:45:33.741608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.746 [2024-12-15 10:45:33.741622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.746 [2024-12-15 10:45:33.741674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.746 [2024-12-15 10:45:33.741688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.746 [2024-12-15 10:45:33.741740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:44.746 [2024-12-15 10:45:33.741755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:45.005 #57 NEW cov: 11855 ft: 14348 corp: 34/2687b lim: 100 exec/s: 57 rss: 70Mb L: 91/98 MS: 1 ChangeBit- 00:07:45.005 [2024-12-15 10:45:33.781485] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:45.005 [2024-12-15 10:45:33.781511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.005 [2024-12-15 10:45:33.781549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:45.005 [2024-12-15 10:45:33.781564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.005 [2024-12-15 10:45:33.781615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:45.005 [2024-12-15 10:45:33.781629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:45.005 #58 NEW cov: 11855 ft: 14353 corp: 35/2766b lim: 100 exec/s: 58 rss: 70Mb L: 79/98 MS: 1 ChangeByte- 00:07:45.005 [2024-12-15 10:45:33.821748] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:45.005 [2024-12-15 10:45:33.821773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.005 [2024-12-15 10:45:33.821833] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:45.005 [2024-12-15 10:45:33.821847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.005 [2024-12-15 10:45:33.821900] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:45.005 [2024-12-15 10:45:33.821913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:45.005 [2024-12-15 10:45:33.821966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:45.005 [2024-12-15 10:45:33.821980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:45.005 #59 NEW cov: 11855 ft: 14370 corp: 36/2860b lim: 100 exec/s: 59 rss: 70Mb L: 94/98 MS: 1 InsertRepeatedBytes- 00:07:45.005 [2024-12-15 10:45:33.861831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:45.005 [2024-12-15 10:45:33.861857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.005 [2024-12-15 10:45:33.861907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:45.005 [2024-12-15 10:45:33.861921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.005 [2024-12-15 10:45:33.861974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:45.005 [2024-12-15 10:45:33.861988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:45.005 [2024-12-15 10:45:33.862041] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:45.005 [2024-12-15 10:45:33.862054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:45.006 #60 NEW cov: 11855 ft: 14383 corp: 37/2959b lim: 100 exec/s: 60 rss: 70Mb L: 99/99 MS: 1 CMP- DE: "\002\000\000\000\000\000\000\000"- 00:07:45.006 [2024-12-15 10:45:33.901957] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:45.006 [2024-12-15 10:45:33.901981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.006 [2024-12-15 10:45:33.902039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:45.006 [2024-12-15 10:45:33.902053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.006 [2024-12-15 10:45:33.902102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:45.006 [2024-12-15 10:45:33.902116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:45.006 [2024-12-15 10:45:33.902169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:45.006 [2024-12-15 10:45:33.902183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:45.006 #61 NEW cov: 11855 ft: 14411 corp: 38/3050b lim: 100 exec/s: 61 rss: 70Mb L: 91/99 MS: 1 ChangeBinInt- 00:07:45.006 [2024-12-15 10:45:33.941939] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:45.006 [2024-12-15 10:45:33.941964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.006 [2024-12-15 10:45:33.942012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:45.006 [2024-12-15 10:45:33.942026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.006 [2024-12-15 10:45:33.942080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:45.006 [2024-12-15 10:45:33.942093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:45.006 #62 NEW cov: 11855 ft: 14417 corp: 39/3123b lim: 100 exec/s: 62 rss: 70Mb L: 73/99 MS: 1 EraseBytes- 00:07:45.006 [2024-12-15 10:45:33.982194] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:45.006 [2024-12-15 10:45:33.982219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.006 [2024-12-15 10:45:33.982256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:45.006 [2024-12-15 10:45:33.982270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.006 [2024-12-15 10:45:33.982325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:45.006 [2024-12-15 10:45:33.982337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:45.006 [2024-12-15 10:45:33.982391] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:45.006 [2024-12-15 10:45:33.982406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:45.006 #63 NEW cov: 11855 ft: 14432 corp: 40/3222b lim: 100 exec/s: 63 rss: 70Mb L: 99/99 MS: 1 PersAutoDict- DE: "\002\000\000\000\000\000\000\000"- 00:07:45.006 [2024-12-15 10:45:34.012125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:45.006 [2024-12-15 10:45:34.012150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.006 [2024-12-15 10:45:34.012215] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:45.006 [2024-12-15 10:45:34.012230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.006 [2024-12-15 10:45:34.012286] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:45.006 [2024-12-15 10:45:34.012302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:45.265 #64 NEW cov: 11855 ft: 14441 corp: 41/3300b lim: 100 exec/s: 64 rss: 70Mb L: 78/99 MS: 1 CopyPart- 00:07:45.265 [2024-12-15 10:45:34.052168] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:45.265 [2024-12-15 10:45:34.052192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.265 [2024-12-15 10:45:34.052257] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:45.265 [2024-12-15 10:45:34.052270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.265 #65 NEW cov: 11855 ft: 14458 corp: 42/3353b lim: 100 exec/s: 65 rss: 70Mb L: 53/99 MS: 1 PersAutoDict- DE: "\002\000\000\000\000\000\000\000"- 00:07:45.265 [2024-12-15 10:45:34.092269] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:45.265 [2024-12-15 10:45:34.092293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.265 [2024-12-15 10:45:34.092344] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:45.265 [2024-12-15 10:45:34.092357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.265 #66 NEW cov: 11855 ft: 14463 corp: 43/3406b lim: 100 exec/s: 66 rss: 70Mb L: 53/99 MS: 1 ChangeBit- 00:07:45.265 [2024-12-15 10:45:34.132669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:45.265 [2024-12-15 10:45:34.132694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.265 [2024-12-15 10:45:34.132758] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:45.265 [2024-12-15 10:45:34.132772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.265 [2024-12-15 10:45:34.132829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:45.265 [2024-12-15 10:45:34.132842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:45.266 [2024-12-15 10:45:34.132894] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:45.266 [2024-12-15 10:45:34.132909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:45.266 #67 NEW cov: 11855 ft: 14467 corp: 44/3490b lim: 100 exec/s: 67 rss: 70Mb L: 84/99 MS: 1 ChangeByte- 00:07:45.266 [2024-12-15 10:45:34.172757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:45.266 [2024-12-15 10:45:34.172783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.266 [2024-12-15 10:45:34.172848] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:45.266 [2024-12-15 10:45:34.172862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.266 [2024-12-15 10:45:34.172914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) 
sqid:1 cid:2 nsid:0 00:07:45.266 [2024-12-15 10:45:34.172929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:45.266 [2024-12-15 10:45:34.172983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:45.266 [2024-12-15 10:45:34.172998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:45.266 #68 NEW cov: 11855 ft: 14496 corp: 45/3574b lim: 100 exec/s: 34 rss: 70Mb L: 84/99 MS: 1 CopyPart- 00:07:45.266 #68 DONE cov: 11855 ft: 14496 corp: 45/3574b lim: 100 exec/s: 34 rss: 70Mb 00:07:45.266 ###### Recommended dictionary. ###### 00:07:45.266 "\012\000" # Uses: 0 00:07:45.266 "\000\000\000\010" # Uses: 1 00:07:45.266 "\377\007" # Uses: 0 00:07:45.266 "\002\000\000\000\000\000\000\000" # Uses: 2 00:07:45.266 ###### End of recommended dictionary. ###### 00:07:45.266 Done 68 runs in 2 second(s) 00:07:45.525 10:45:34 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_18.conf 00:07:45.525 10:45:34 -- ../common.sh@72 -- # (( i++ )) 00:07:45.525 10:45:34 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:45.525 10:45:34 -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:07:45.525 10:45:34 -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:07:45.525 10:45:34 -- nvmf/run.sh@24 -- # local timen=1 00:07:45.525 10:45:34 -- nvmf/run.sh@25 -- # local core=0x1 00:07:45.525 10:45:34 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:45.525 10:45:34 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:07:45.525 10:45:34 -- nvmf/run.sh@29 -- # printf %02d 19 00:07:45.525 10:45:34 -- nvmf/run.sh@29 -- # port=4419 00:07:45.525 10:45:34 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:45.525 10:45:34 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:07:45.525 10:45:34 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:45.525 10:45:34 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 -r /var/tmp/spdk19.sock 00:07:45.525 [2024-12-15 10:45:34.356923] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:45.525 [2024-12-15 10:45:34.356992] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1315277 ] 00:07:45.525 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.784 [2024-12-15 10:45:34.610371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.784 [2024-12-15 10:45:34.692105] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:45.784 [2024-12-15 10:45:34.692245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.784 [2024-12-15 10:45:34.750183] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.784 [2024-12-15 10:45:34.766514] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:07:45.784 INFO: Running with entropic power schedule (0xFF, 100). 00:07:45.784 INFO: Seed: 2386277732 00:07:45.784 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:45.784 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:45.784 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:45.784 INFO: A corpus is not provided, starting from an empty corpus 00:07:45.784 #2 INITED exec/s: 0 rss: 60Mb 00:07:45.784 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:45.784 This may also happen if the target rejected all inputs we tried so far 00:07:46.044 [2024-12-15 10:45:34.814236] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:8225 00:07:46.044 [2024-12-15 10:45:34.814271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.044 [2024-12-15 10:45:34.814319] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2314885530818453536 len:8225 00:07:46.044 [2024-12-15 10:45:34.814337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.044 [2024-12-15 10:45:34.814365] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2314885668257407008 len:1 00:07:46.044 [2024-12-15 10:45:34.814382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.315 NEW_FUNC[1/670]: 0x45ae18 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:07:46.315 NEW_FUNC[2/670]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:46.315 #6 NEW cov: 11606 ft: 11607 corp: 2/34b lim: 50 exec/s: 0 rss: 68Mb L: 33/33 MS: 4 CMP-EraseBytes-CopyPart-InsertRepeatedBytes- DE: "@\000\000\000"- 00:07:46.315 [2024-12-15 10:45:35.134955] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:8225 00:07:46.315 [2024-12-15 10:45:35.134992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.315 
[2024-12-15 10:45:35.135041] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2314885530818453536 len:8193 00:07:46.315 [2024-12-15 10:45:35.135058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.315 #7 NEW cov: 11719 ft: 12272 corp: 3/54b lim: 50 exec/s: 0 rss: 69Mb L: 20/33 MS: 1 EraseBytes- 00:07:46.315 [2024-12-15 10:45:35.205112] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:8225 00:07:46.315 [2024-12-15 10:45:35.205147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.315 [2024-12-15 10:45:35.205179] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2314885530818453536 len:8225 00:07:46.315 [2024-12-15 10:45:35.205197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.315 [2024-12-15 10:45:35.205226] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2314893364838801440 len:1 00:07:46.315 [2024-12-15 10:45:35.205242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.315 #13 NEW cov: 11725 ft: 12697 corp: 4/87b lim: 50 exec/s: 0 rss: 69Mb L: 33/33 MS: 1 ChangeByte- 00:07:46.315 [2024-12-15 10:45:35.255218] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:8225 00:07:46.315 [2024-12-15 10:45:35.255246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.315 [2024-12-15 10:45:35.255293] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9042521606848512 len:8225 00:07:46.315 [2024-12-15 10:45:35.255311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.315 [2024-12-15 10:45:35.255339] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2314893364838801440 len:1 00:07:46.315 [2024-12-15 10:45:35.255355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.315 #14 NEW cov: 11810 ft: 12992 corp: 5/120b lim: 50 exec/s: 0 rss: 69Mb L: 33/33 MS: 1 PersAutoDict- DE: "@\000\000\000"- 00:07:46.315 [2024-12-15 10:45:35.325393] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:8225 00:07:46.315 [2024-12-15 10:45:35.325429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.315 [2024-12-15 10:45:35.325463] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2314885530818453536 len:8375 00:07:46.315 [2024-12-15 10:45:35.325481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.577 #15 NEW cov: 11810 ft: 13095 corp: 6/148b lim: 50 exec/s: 0 rss: 69Mb L: 
28/33 MS: 1 InsertRepeatedBytes- 00:07:46.577 [2024-12-15 10:45:35.395546] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:24609 00:07:46.577 [2024-12-15 10:45:35.395574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.577 [2024-12-15 10:45:35.395621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2314885530818453536 len:8375 00:07:46.577 [2024-12-15 10:45:35.395638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.577 #21 NEW cov: 11810 ft: 13144 corp: 7/176b lim: 50 exec/s: 0 rss: 69Mb L: 28/33 MS: 1 ChangeBit- 00:07:46.577 [2024-12-15 10:45:35.465806] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:8225 00:07:46.577 [2024-12-15 10:45:35.465834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.577 [2024-12-15 10:45:35.465880] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9042521606848512 len:8225 00:07:46.577 [2024-12-15 10:45:35.465897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.577 [2024-12-15 10:45:35.465930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2314893364838801440 len:1 00:07:46.577 [2024-12-15 10:45:35.465946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.577 #22 NEW cov: 11810 ft: 13213 corp: 8/209b lim: 50 exec/s: 0 rss: 69Mb L: 33/33 MS: 1 ShuffleBytes- 00:07:46.577 [2024-12-15 10:45:35.525948] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:8225 00:07:46.577 [2024-12-15 10:45:35.525977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.577 [2024-12-15 10:45:35.526022] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9042521606848512 len:8225 00:07:46.577 [2024-12-15 10:45:35.526040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.577 [2024-12-15 10:45:35.526069] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2314893364838801440 len:1 00:07:46.577 [2024-12-15 10:45:35.526085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.577 #28 NEW cov: 11810 ft: 13301 corp: 9/242b lim: 50 exec/s: 0 rss: 69Mb L: 33/33 MS: 1 CopyPart- 00:07:46.577 [2024-12-15 10:45:35.586057] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:8225 00:07:46.577 [2024-12-15 10:45:35.586088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.836 #29 NEW cov: 11810 ft: 13639 corp: 10/260b lim: 
50 exec/s: 0 rss: 69Mb L: 18/33 MS: 1 EraseBytes- 00:07:46.836 [2024-12-15 10:45:35.646220] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:8225 00:07:46.836 [2024-12-15 10:45:35.646249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.836 [2024-12-15 10:45:35.646282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2850072119214080 len:8225 00:07:46.836 [2024-12-15 10:45:35.646299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.836 #30 NEW cov: 11810 ft: 13700 corp: 11/281b lim: 50 exec/s: 0 rss: 69Mb L: 21/33 MS: 1 CrossOver- 00:07:46.836 [2024-12-15 10:45:35.696322] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:8225 00:07:46.836 [2024-12-15 10:45:35.696351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.836 [2024-12-15 10:45:35.696397] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9042521605226496 len:8225 00:07:46.836 [2024-12-15 10:45:35.696420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.836 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:46.836 #31 NEW cov: 11827 ft: 13799 corp: 12/310b lim: 50 exec/s: 0 rss: 69Mb L: 29/33 MS: 1 CrossOver- 00:07:46.836 [2024-12-15 10:45:35.766594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:8225 00:07:46.836 [2024-12-15 10:45:35.766622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.836 [2024-12-15 10:45:35.766669] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2314885530818455584 len:8193 00:07:46.836 [2024-12-15 10:45:35.766690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.836 #32 NEW cov: 11827 ft: 13824 corp: 13/330b lim: 50 exec/s: 32 rss: 70Mb L: 20/33 MS: 1 ChangeBit- 00:07:46.836 [2024-12-15 10:45:35.816719] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314850208638312480 len:1 00:07:46.836 [2024-12-15 10:45:35.816748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.836 [2024-12-15 10:45:35.816793] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:8225 00:07:46.836 [2024-12-15 10:45:35.816811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.836 [2024-12-15 10:45:35.816840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:137977929792 len:8225 00:07:46.836 [2024-12-15 10:45:35.816855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.836 [2024-12-15 10:45:35.816882] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2314885530818453536 len:10049 00:07:46.836 [2024-12-15 10:45:35.816898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:47.096 #33 NEW cov: 11827 ft: 14106 corp: 14/375b lim: 50 exec/s: 33 rss: 70Mb L: 45/45 MS: 1 InsertRepeatedBytes- 00:07:47.096 [2024-12-15 10:45:35.866775] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:8225 00:07:47.096 [2024-12-15 10:45:35.866805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.096 [2024-12-15 10:45:35.866851] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2850364176990208 len:8225 00:07:47.096 [2024-12-15 10:45:35.866868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.096 #39 NEW cov: 11827 ft: 14142 corp: 15/396b lim: 50 exec/s: 39 rss: 70Mb L: 21/45 MS: 1 ChangeByte- 00:07:47.096 [2024-12-15 10:45:35.926967] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354854 len:8225 00:07:47.096 [2024-12-15 10:45:35.926997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.096 [2024-12-15 10:45:35.927044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9042521605226496 len:8225 00:07:47.096 [2024-12-15 10:45:35.927061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.096 #40 NEW cov: 11827 ft: 14206 corp: 16/425b lim: 50 exec/s: 40 rss: 70Mb L: 29/45 MS: 1 ChangeByte- 00:07:47.096 [2024-12-15 10:45:35.987910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354854 len:8232 00:07:47.096 [2024-12-15 10:45:35.987940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.096 [2024-12-15 10:45:35.987975] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2314893365373567008 len:1 00:07:47.096 [2024-12-15 10:45:35.987990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.096 [2024-12-15 10:45:35.988045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2316890902588563488 len:65 00:07:47.096 [2024-12-15 10:45:35.988060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.096 #41 NEW cov: 11827 ft: 14340 corp: 17/459b lim: 50 exec/s: 41 rss: 70Mb L: 34/45 MS: 1 CopyPart- 00:07:47.096 [2024-12-15 10:45:36.028025] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:8225 00:07:47.096 [2024-12-15 10:45:36.028053] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.096 [2024-12-15 10:45:36.028104] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9042521606848512 len:57889 00:07:47.096 [2024-12-15 10:45:36.028120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.096 [2024-12-15 10:45:36.028173] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2314893364838801440 len:1 00:07:47.096 [2024-12-15 10:45:36.028189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.096 #42 NEW cov: 11827 ft: 14409 corp: 18/492b lim: 50 exec/s: 42 rss: 70Mb L: 33/45 MS: 1 ChangeBinInt- 00:07:47.096 [2024-12-15 10:45:36.068018] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:8289 00:07:47.096 [2024-12-15 10:45:36.068045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.096 [2024-12-15 10:45:36.068115] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2314885530818453536 len:8193 00:07:47.096 [2024-12-15 10:45:36.068131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.096 #43 NEW cov: 11827 ft: 14438 corp: 19/512b lim: 50 exec/s: 43 rss: 70Mb L: 20/45 MS: 1 ChangeBit- 00:07:47.096 [2024-12-15 10:45:36.108124] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:8225 00:07:47.096 [2024-12-15 10:45:36.108152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.096 [2024-12-15 10:45:36.108200] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9042521606848512 len:8225 00:07:47.096 [2024-12-15 10:45:36.108216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.356 #44 NEW cov: 11827 ft: 14475 corp: 20/537b lim: 50 exec/s: 44 rss: 70Mb L: 25/45 MS: 1 EraseBytes- 00:07:47.356 [2024-12-15 10:45:36.148218] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:8289 00:07:47.356 [2024-12-15 10:45:36.148246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.356 [2024-12-15 10:45:36.148292] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2314885530818453536 len:58913 00:07:47.356 [2024-12-15 10:45:36.148307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.356 #45 NEW cov: 11827 ft: 14522 corp: 21/558b lim: 50 exec/s: 45 rss: 70Mb L: 21/45 MS: 1 InsertByte- 00:07:47.356 [2024-12-15 10:45:36.188642] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:8225 00:07:47.356 [2024-12-15 
10:45:36.188669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.356 [2024-12-15 10:45:36.188740] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2314885530818453536 len:8225 00:07:47.356 [2024-12-15 10:45:36.188756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.356 [2024-12-15 10:45:36.188826] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2314893364838801440 len:1 00:07:47.356 [2024-12-15 10:45:36.188842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.356 [2024-12-15 10:45:36.188894] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2314885531355324448 len:8225 00:07:47.356 [2024-12-15 10:45:36.188910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:47.356 #46 NEW cov: 11827 ft: 14551 corp: 22/600b lim: 50 exec/s: 46 rss: 70Mb L: 42/45 MS: 1 CrossOver- 00:07:47.356 [2024-12-15 10:45:36.228478] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:8225 00:07:47.356 [2024-12-15 10:45:36.228505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.356 [2024-12-15 10:45:36.228547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9042521605230592 len:8225 00:07:47.356 [2024-12-15 10:45:36.228564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.356 #47 NEW cov: 11827 ft: 14565 corp: 23/629b lim: 50 exec/s: 47 rss: 70Mb L: 29/45 MS: 1 ChangeBit- 00:07:47.356 [2024-12-15 10:45:36.268479] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:8225 00:07:47.356 [2024-12-15 10:45:36.268506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.356 #48 NEW cov: 11827 ft: 14581 corp: 24/647b lim: 50 exec/s: 48 rss: 70Mb L: 18/45 MS: 1 EraseBytes- 00:07:47.356 [2024-12-15 10:45:36.308941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2316011430356197408 len:8225 00:07:47.356 [2024-12-15 10:45:36.308970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.356 [2024-12-15 10:45:36.309007] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2314885530818453536 len:8225 00:07:47.356 [2024-12-15 10:45:36.309023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.356 [2024-12-15 10:45:36.309076] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2314893364838801440 len:1 00:07:47.356 [2024-12-15 10:45:36.309091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.356 [2024-12-15 10:45:36.309143] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2314885531355324448 len:8225 00:07:47.356 [2024-12-15 10:45:36.309158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:47.356 #49 NEW cov: 11827 ft: 14641 corp: 25/689b lim: 50 exec/s: 49 rss: 70Mb L: 42/45 MS: 1 ChangeBit- 00:07:47.356 [2024-12-15 10:45:36.348850] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2316890902217367584 len:33 00:07:47.356 [2024-12-15 10:45:36.348877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.356 [2024-12-15 10:45:36.348915] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2828260566527647776 len:16385 00:07:47.356 [2024-12-15 10:45:36.348929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.615 #50 NEW cov: 11827 ft: 14662 corp: 26/712b lim: 50 exec/s: 50 rss: 70Mb L: 23/45 MS: 1 EraseBytes- 00:07:47.616 [2024-12-15 10:45:36.388979] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:8225 00:07:47.616 [2024-12-15 10:45:36.389009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.616 [2024-12-15 10:45:36.389063] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2850072117592064 len:8225 00:07:47.616 [2024-12-15 10:45:36.389078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.616 #51 NEW cov: 11827 ft: 14677 corp: 27/741b lim: 50 exec/s: 51 rss: 70Mb L: 29/45 MS: 1 CrossOver- 00:07:47.616 [2024-12-15 10:45:36.429059] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354854 len:8225 00:07:47.616 [2024-12-15 10:45:36.429085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.616 [2024-12-15 10:45:36.429137] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9007337233137664 len:8225 00:07:47.616 [2024-12-15 10:45:36.429152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.616 #52 NEW cov: 11827 ft: 14690 corp: 28/770b lim: 50 exec/s: 52 rss: 70Mb L: 29/45 MS: 1 ChangeBit- 00:07:47.616 [2024-12-15 10:45:36.469307] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6566283578687103078 len:8232 00:07:47.616 [2024-12-15 10:45:36.469334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.616 [2024-12-15 10:45:36.469369] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2314893365373567008 len:1 00:07:47.616 [2024-12-15 10:45:36.469386] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.616 [2024-12-15 10:45:36.469443] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2316890902588563488 len:65 00:07:47.616 [2024-12-15 10:45:36.469459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.616 #53 NEW cov: 11827 ft: 14705 corp: 29/804b lim: 50 exec/s: 53 rss: 70Mb L: 34/45 MS: 1 ChangeByte- 00:07:47.616 [2024-12-15 10:45:36.509533] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:8225 00:07:47.616 [2024-12-15 10:45:36.509559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.616 [2024-12-15 10:45:36.509612] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4611686293844271143 len:65 00:07:47.616 [2024-12-15 10:45:36.509627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.616 [2024-12-15 10:45:36.509678] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11133094209344 len:8225 00:07:47.616 [2024-12-15 10:45:36.509694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.616 [2024-12-15 10:45:36.509749] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2314885668257407008 len:8193 00:07:47.616 [2024-12-15 10:45:36.509764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:47.616 #54 NEW cov: 11827 ft: 14773 corp: 30/844b lim: 50 exec/s: 54 rss: 70Mb L: 40/45 MS: 1 CrossOver- 00:07:47.616 [2024-12-15 10:45:36.549562] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:8225 00:07:47.616 [2024-12-15 10:45:36.549589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.616 [2024-12-15 10:45:36.549649] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:47.616 [2024-12-15 10:45:36.549665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.616 [2024-12-15 10:45:36.549717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:4611686022722354976 len:2593 00:07:47.616 [2024-12-15 10:45:36.549733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.616 #55 NEW cov: 11827 ft: 14782 corp: 31/878b lim: 50 exec/s: 55 rss: 70Mb L: 34/45 MS: 1 InsertRepeatedBytes- 00:07:47.616 [2024-12-15 10:45:36.589548] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530447257632 len:8225 00:07:47.616 [2024-12-15 10:45:36.589575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.616 [2024-12-15 10:45:36.589645] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2314885530818453536 len:8193 00:07:47.616 [2024-12-15 10:45:36.589660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.616 #56 NEW cov: 11827 ft: 14787 corp: 32/898b lim: 50 exec/s: 56 rss: 70Mb L: 20/45 MS: 1 ChangeBit- 00:07:47.876 [2024-12-15 10:45:36.629904] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885667888308256 len:1 00:07:47.876 [2024-12-15 10:45:36.629932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.876 [2024-12-15 10:45:36.629979] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2314885530281582624 len:8225 00:07:47.876 [2024-12-15 10:45:36.629994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.876 [2024-12-15 10:45:36.630047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:70369283155776 len:8225 00:07:47.876 [2024-12-15 10:45:36.630061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.876 [2024-12-15 10:45:36.630115] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:9042521605226496 len:8225 00:07:47.876 [2024-12-15 10:45:36.630130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:47.876 #57 NEW cov: 11827 ft: 14803 corp: 33/947b lim: 50 exec/s: 57 rss: 70Mb L: 49/49 MS: 1 CrossOver- 00:07:47.876 [2024-12-15 10:45:36.669914] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:8225 00:07:47.876 [2024-12-15 10:45:36.669941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.876 [2024-12-15 10:45:36.669978] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2314885530818453536 len:8225 00:07:47.876 [2024-12-15 10:45:36.669993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.876 [2024-12-15 10:45:36.670047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2314885668257407008 len:1 00:07:47.876 [2024-12-15 10:45:36.670062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.876 #58 NEW cov: 11827 ft: 14809 corp: 34/980b lim: 50 exec/s: 58 rss: 70Mb L: 33/49 MS: 1 ShuffleBytes- 00:07:47.876 [2024-12-15 10:45:36.709895] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:8225 00:07:47.876 [2024-12-15 10:45:36.709922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.876 [2024-12-15 
10:45:36.709975] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9043337649012736 len:8225 00:07:47.876 [2024-12-15 10:45:36.709991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.876 #59 NEW cov: 11834 ft: 14843 corp: 35/1009b lim: 50 exec/s: 59 rss: 70Mb L: 29/49 MS: 1 ChangeBinInt- 00:07:47.876 [2024-12-15 10:45:36.740021] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:8225 00:07:47.876 [2024-12-15 10:45:36.740048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.876 [2024-12-15 10:45:36.740101] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9042521606848512 len:16385 00:07:47.876 [2024-12-15 10:45:36.740116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.876 #60 NEW cov: 11834 ft: 14857 corp: 36/1030b lim: 50 exec/s: 60 rss: 70Mb L: 21/49 MS: 1 EraseBytes- 00:07:47.876 [2024-12-15 10:45:36.780039] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2314885530449354784 len:8225 00:07:47.876 [2024-12-15 10:45:36.780066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.876 #61 NEW cov: 11834 ft: 14886 corp: 37/1043b lim: 50 exec/s: 30 rss: 70Mb L: 13/49 MS: 1 EraseBytes- 00:07:47.876 #61 DONE cov: 11834 ft: 14886 corp: 37/1043b lim: 50 exec/s: 30 rss: 70Mb 00:07:47.876 ###### Recommended dictionary. ###### 00:07:47.876 "@\000\000\000" # Uses: 2 00:07:47.876 ###### End of recommended dictionary. 
###### 00:07:47.876 Done 61 runs in 2 second(s) 00:07:48.135 10:45:36 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_19.conf 00:07:48.135 10:45:36 -- ../common.sh@72 -- # (( i++ )) 00:07:48.135 10:45:36 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:48.135 10:45:36 -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:07:48.135 10:45:36 -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:07:48.135 10:45:36 -- nvmf/run.sh@24 -- # local timen=1 00:07:48.135 10:45:36 -- nvmf/run.sh@25 -- # local core=0x1 00:07:48.135 10:45:36 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:48.135 10:45:36 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:07:48.135 10:45:36 -- nvmf/run.sh@29 -- # printf %02d 20 00:07:48.135 10:45:36 -- nvmf/run.sh@29 -- # port=4420 00:07:48.135 10:45:36 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:48.135 10:45:36 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:07:48.135 10:45:36 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:48.135 10:45:36 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 -r /var/tmp/spdk20.sock 00:07:48.135 [2024-12-15 10:45:36.966862] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:48.135 [2024-12-15 10:45:36.966934] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1315820 ] 00:07:48.135 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.394 [2024-12-15 10:45:37.222322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.394 [2024-12-15 10:45:37.310756] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:48.394 [2024-12-15 10:45:37.310881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.394 [2024-12-15 10:45:37.368862] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.394 [2024-12-15 10:45:37.385163] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:48.394 INFO: Running with entropic power schedule (0xFF, 100). 00:07:48.394 INFO: Seed: 710287909 00:07:48.654 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:48.654 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:48.654 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:48.654 INFO: A corpus is not provided, starting from an empty corpus 00:07:48.654 #2 INITED exec/s: 0 rss: 60Mb 00:07:48.654 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:48.654 This may also happen if the target rejected all inputs we tried so far 00:07:48.654 [2024-12-15 10:45:37.455592] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:48.654 [2024-12-15 10:45:37.455635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.654 [2024-12-15 10:45:37.455691] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:48.654 [2024-12-15 10:45:37.455710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.913 NEW_FUNC[1/672]: 0x45c9d8 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:07:48.913 NEW_FUNC[2/672]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:48.913 #6 NEW cov: 11663 ft: 11664 corp: 2/43b lim: 90 exec/s: 0 rss: 68Mb L: 42/42 MS: 4 ChangeBit-InsertByte-ChangeByte-InsertRepeatedBytes- 00:07:48.913 [2024-12-15 10:45:37.775374] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:48.913 [2024-12-15 10:45:37.775431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.913 #8 NEW cov: 11777 ft: 13049 corp: 3/73b lim: 90 exec/s: 0 rss: 68Mb L: 30/42 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:48.913 [2024-12-15 10:45:37.815899] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:48.913 [2024-12-15 10:45:37.815931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.913 [2024-12-15 10:45:37.816042] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:48.913 [2024-12-15 10:45:37.816060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.913 [2024-12-15 10:45:37.816171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:48.913 [2024-12-15 10:45:37.816192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:48.913 #9 NEW cov: 11783 ft: 13657 corp: 4/139b lim: 90 exec/s: 0 rss: 69Mb L: 66/66 MS: 1 CopyPart- 00:07:48.913 [2024-12-15 10:45:37.855616] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:48.913 [2024-12-15 10:45:37.855644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.913 #10 NEW cov: 11868 ft: 14021 corp: 5/170b lim: 90 exec/s: 0 rss: 69Mb L: 31/66 MS: 1 InsertByte- 00:07:48.913 [2024-12-15 10:45:37.905742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:48.913 [2024-12-15 10:45:37.905769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.172 #11 NEW cov: 11868 ft: 14150 corp: 6/200b lim: 90 exec/s: 
0 rss: 69Mb L: 30/66 MS: 1 ShuffleBytes- 00:07:49.172 [2024-12-15 10:45:37.946393] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.172 [2024-12-15 10:45:37.946424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.172 [2024-12-15 10:45:37.946515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:49.172 [2024-12-15 10:45:37.946535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.172 [2024-12-15 10:45:37.946660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:49.172 [2024-12-15 10:45:37.946682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.172 #17 NEW cov: 11868 ft: 14276 corp: 7/259b lim: 90 exec/s: 0 rss: 69Mb L: 59/66 MS: 1 InsertRepeatedBytes- 00:07:49.172 [2024-12-15 10:45:37.985966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.172 [2024-12-15 10:45:37.985997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.172 #19 NEW cov: 11868 ft: 14353 corp: 8/291b lim: 90 exec/s: 0 rss: 69Mb L: 32/66 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:07:49.172 [2024-12-15 10:45:38.026586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.172 [2024-12-15 10:45:38.026619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.172 [2024-12-15 10:45:38.026757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:49.172 [2024-12-15 10:45:38.026775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.172 [2024-12-15 10:45:38.026892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:49.173 [2024-12-15 10:45:38.026911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.173 #25 NEW cov: 11868 ft: 14406 corp: 9/351b lim: 90 exec/s: 0 rss: 69Mb L: 60/66 MS: 1 InsertByte- 00:07:49.173 [2024-12-15 10:45:38.066235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.173 [2024-12-15 10:45:38.066260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.173 #26 NEW cov: 11868 ft: 14441 corp: 10/383b lim: 90 exec/s: 0 rss: 69Mb L: 32/66 MS: 1 ShuffleBytes- 00:07:49.173 [2024-12-15 10:45:38.106354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.173 [2024-12-15 10:45:38.106379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.173 #27 NEW cov: 11868 ft: 14533 corp: 11/414b lim: 90 exec/s: 0 rss: 69Mb L: 31/66 MS: 1 CrossOver- 00:07:49.173 [2024-12-15 
10:45:38.147092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.173 [2024-12-15 10:45:38.147122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.173 [2024-12-15 10:45:38.147243] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:49.173 [2024-12-15 10:45:38.147263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.173 [2024-12-15 10:45:38.147380] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:49.173 [2024-12-15 10:45:38.147403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.173 #33 NEW cov: 11868 ft: 14590 corp: 12/474b lim: 90 exec/s: 0 rss: 69Mb L: 60/66 MS: 1 InsertByte- 00:07:49.432 [2024-12-15 10:45:38.187337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.432 [2024-12-15 10:45:38.187367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.432 [2024-12-15 10:45:38.187460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:49.432 [2024-12-15 10:45:38.187482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.432 [2024-12-15 10:45:38.187608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:49.432 [2024-12-15 10:45:38.187632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.432 [2024-12-15 10:45:38.187756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:49.432 [2024-12-15 10:45:38.187780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:49.432 #34 NEW cov: 11868 ft: 14918 corp: 13/559b lim: 90 exec/s: 0 rss: 69Mb L: 85/85 MS: 1 InsertRepeatedBytes- 00:07:49.432 [2024-12-15 10:45:38.227430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.432 [2024-12-15 10:45:38.227462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.432 [2024-12-15 10:45:38.227570] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:49.432 [2024-12-15 10:45:38.227589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.432 [2024-12-15 10:45:38.227706] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:49.432 [2024-12-15 10:45:38.227728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.432 [2024-12-15 10:45:38.227851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 
nsid:0 00:07:49.432 [2024-12-15 10:45:38.227873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:49.432 #35 NEW cov: 11868 ft: 14949 corp: 14/645b lim: 90 exec/s: 0 rss: 69Mb L: 86/86 MS: 1 InsertByte- 00:07:49.432 [2024-12-15 10:45:38.276782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.432 [2024-12-15 10:45:38.276809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.432 #36 NEW cov: 11868 ft: 14983 corp: 15/677b lim: 90 exec/s: 0 rss: 69Mb L: 32/86 MS: 1 ChangeByte- 00:07:49.432 [2024-12-15 10:45:38.317659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.432 [2024-12-15 10:45:38.317693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.432 [2024-12-15 10:45:38.317814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:49.432 [2024-12-15 10:45:38.317838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.433 [2024-12-15 10:45:38.317957] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:49.433 [2024-12-15 10:45:38.317977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.433 [2024-12-15 10:45:38.318096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:49.433 [2024-12-15 10:45:38.318117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:49.433 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:49.433 #37 NEW cov: 11891 ft: 15021 corp: 16/764b lim: 90 exec/s: 0 rss: 69Mb L: 87/87 MS: 1 InsertRepeatedBytes- 00:07:49.433 [2024-12-15 10:45:38.357022] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.433 [2024-12-15 10:45:38.357046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.433 #38 NEW cov: 11891 ft: 15064 corp: 17/796b lim: 90 exec/s: 0 rss: 69Mb L: 32/87 MS: 1 ChangeBinInt- 00:07:49.433 [2024-12-15 10:45:38.397147] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.433 [2024-12-15 10:45:38.397174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.433 #39 NEW cov: 11891 ft: 15082 corp: 18/828b lim: 90 exec/s: 0 rss: 70Mb L: 32/87 MS: 1 ChangeBit- 00:07:49.433 [2024-12-15 10:45:38.437781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.433 [2024-12-15 10:45:38.437817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.433 [2024-12-15 10:45:38.437937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:49.433 [2024-12-15 10:45:38.437960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.433 [2024-12-15 10:45:38.438087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:49.433 [2024-12-15 10:45:38.438112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.692 #40 NEW cov: 11891 ft: 15121 corp: 19/885b lim: 90 exec/s: 40 rss: 70Mb L: 57/87 MS: 1 EraseBytes- 00:07:49.692 [2024-12-15 10:45:38.487486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.692 [2024-12-15 10:45:38.487513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.692 #41 NEW cov: 11891 ft: 15125 corp: 20/915b lim: 90 exec/s: 41 rss: 70Mb L: 30/87 MS: 1 ChangeByte- 00:07:49.692 [2024-12-15 10:45:38.527665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.692 [2024-12-15 10:45:38.527691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.692 #42 NEW cov: 11891 ft: 15131 corp: 21/937b lim: 90 exec/s: 42 rss: 70Mb L: 22/87 MS: 1 EraseBytes- 00:07:49.692 [2024-12-15 10:45:38.578174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.692 [2024-12-15 10:45:38.578205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.692 [2024-12-15 10:45:38.578319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:49.692 [2024-12-15 10:45:38.578342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.692 [2024-12-15 10:45:38.578461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:49.692 [2024-12-15 10:45:38.578483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.692 #43 NEW cov: 11891 ft: 15156 corp: 22/998b lim: 90 exec/s: 43 rss: 70Mb L: 61/87 MS: 1 CrossOver- 00:07:49.692 [2024-12-15 10:45:38.618292] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.692 [2024-12-15 10:45:38.618322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.692 [2024-12-15 10:45:38.618433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:49.692 [2024-12-15 10:45:38.618451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.692 [2024-12-15 10:45:38.618568] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:49.692 [2024-12-15 10:45:38.618586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 
dnr:1 00:07:49.692 #44 NEW cov: 11891 ft: 15192 corp: 23/1058b lim: 90 exec/s: 44 rss: 70Mb L: 60/87 MS: 1 CopyPart- 00:07:49.692 [2024-12-15 10:45:38.668053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.692 [2024-12-15 10:45:38.668085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.692 #45 NEW cov: 11891 ft: 15257 corp: 24/1085b lim: 90 exec/s: 45 rss: 70Mb L: 27/87 MS: 1 EraseBytes- 00:07:49.994 [2024-12-15 10:45:38.718201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.994 [2024-12-15 10:45:38.718229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.994 #46 NEW cov: 11891 ft: 15275 corp: 25/1116b lim: 90 exec/s: 46 rss: 70Mb L: 31/87 MS: 1 ShuffleBytes- 00:07:49.994 [2024-12-15 10:45:38.758848] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.994 [2024-12-15 10:45:38.758881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.994 [2024-12-15 10:45:38.759002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:49.994 [2024-12-15 10:45:38.759023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.994 [2024-12-15 10:45:38.759143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:49.994 [2024-12-15 10:45:38.759163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.994 #47 NEW cov: 11891 ft: 15289 corp: 26/1177b lim: 90 exec/s: 47 rss: 70Mb L: 61/87 MS: 1 ChangeBit- 00:07:49.994 [2024-12-15 10:45:38.809182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.994 [2024-12-15 10:45:38.809211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.994 [2024-12-15 10:45:38.809324] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:49.994 [2024-12-15 10:45:38.809354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.994 [2024-12-15 10:45:38.809481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:49.994 [2024-12-15 10:45:38.809502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.994 [2024-12-15 10:45:38.809620] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:49.994 [2024-12-15 10:45:38.809640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:49.994 #53 NEW cov: 11891 ft: 15306 corp: 27/1266b lim: 90 exec/s: 53 rss: 70Mb L: 89/89 MS: 1 CMP- DE: "\002\000\000\000"- 00:07:49.994 [2024-12-15 10:45:38.848549] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.994 [2024-12-15 10:45:38.848586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.994 #54 NEW cov: 11891 ft: 15318 corp: 28/1300b lim: 90 exec/s: 54 rss: 70Mb L: 34/89 MS: 1 InsertRepeatedBytes- 00:07:49.994 [2024-12-15 10:45:38.899497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.994 [2024-12-15 10:45:38.899529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.994 [2024-12-15 10:45:38.899652] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:49.994 [2024-12-15 10:45:38.899673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.994 [2024-12-15 10:45:38.899788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:49.994 [2024-12-15 10:45:38.899811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.994 [2024-12-15 10:45:38.899932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:49.994 [2024-12-15 10:45:38.899954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:49.994 #60 NEW cov: 11891 ft: 15331 corp: 29/1386b lim: 90 exec/s: 60 rss: 70Mb L: 86/89 MS: 1 PersAutoDict- DE: "\002\000\000\000"- 00:07:49.994 [2024-12-15 10:45:38.939824] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.994 [2024-12-15 10:45:38.939853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.994 [2024-12-15 10:45:38.939943] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:49.994 [2024-12-15 10:45:38.939963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.994 [2024-12-15 10:45:38.940075] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:49.994 [2024-12-15 10:45:38.940099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.994 [2024-12-15 10:45:38.940211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:49.994 [2024-12-15 10:45:38.940232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:49.994 [2024-12-15 10:45:38.940353] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:49.994 [2024-12-15 10:45:38.940376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:49.994 #61 NEW cov: 11891 ft: 15394 corp: 30/1476b lim: 90 exec/s: 61 rss: 70Mb L: 90/90 MS: 1 CrossOver- 00:07:49.994 [2024-12-15 
10:45:38.978978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.994 [2024-12-15 10:45:38.979006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.994 #62 NEW cov: 11891 ft: 15414 corp: 31/1507b lim: 90 exec/s: 62 rss: 70Mb L: 31/90 MS: 1 InsertByte- 00:07:50.253 [2024-12-15 10:45:39.019593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:50.253 [2024-12-15 10:45:39.019638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.253 [2024-12-15 10:45:39.019781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:50.253 [2024-12-15 10:45:39.019808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:50.253 [2024-12-15 10:45:39.019924] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:50.253 [2024-12-15 10:45:39.019947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:50.253 #63 NEW cov: 11891 ft: 15426 corp: 32/1565b lim: 90 exec/s: 63 rss: 70Mb L: 58/90 MS: 1 InsertByte- 00:07:50.253 [2024-12-15 10:45:39.069995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:50.253 [2024-12-15 10:45:39.070026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.253 [2024-12-15 10:45:39.070113] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:50.253 [2024-12-15 10:45:39.070136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:50.253 [2024-12-15 10:45:39.070269] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:50.253 [2024-12-15 10:45:39.070292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:50.253 [2024-12-15 10:45:39.070418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:50.253 [2024-12-15 10:45:39.070438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:50.253 #64 NEW cov: 11891 ft: 15479 corp: 33/1637b lim: 90 exec/s: 64 rss: 70Mb L: 72/90 MS: 1 InsertRepeatedBytes- 00:07:50.254 [2024-12-15 10:45:39.119905] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:50.254 [2024-12-15 10:45:39.119935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.254 [2024-12-15 10:45:39.120032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:50.254 [2024-12-15 10:45:39.120050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:50.254 
[2024-12-15 10:45:39.120166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:50.254 [2024-12-15 10:45:39.120188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:50.254 #65 NEW cov: 11891 ft: 15493 corp: 34/1706b lim: 90 exec/s: 65 rss: 70Mb L: 69/90 MS: 1 CrossOver- 00:07:50.254 [2024-12-15 10:45:39.159541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:50.254 [2024-12-15 10:45:39.159572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.254 #67 NEW cov: 11891 ft: 15554 corp: 35/1735b lim: 90 exec/s: 67 rss: 70Mb L: 29/90 MS: 2 EraseBytes-CopyPart- 00:07:50.254 [2024-12-15 10:45:39.199693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:50.254 [2024-12-15 10:45:39.199725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.254 #68 NEW cov: 11891 ft: 15622 corp: 36/1757b lim: 90 exec/s: 68 rss: 70Mb L: 22/90 MS: 1 ChangeBinInt- 00:07:50.254 [2024-12-15 10:45:39.239732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:50.254 [2024-12-15 10:45:39.239761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.254 #69 NEW cov: 11891 ft: 15673 corp: 37/1787b lim: 90 exec/s: 69 rss: 70Mb L: 30/90 MS: 1 ShuffleBytes- 00:07:50.513 [2024-12-15 10:45:39.280424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:50.513 [2024-12-15 10:45:39.280459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.513 [2024-12-15 10:45:39.280558] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:50.513 [2024-12-15 10:45:39.280577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:50.513 [2024-12-15 10:45:39.280699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:50.513 [2024-12-15 10:45:39.280722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:50.513 #70 NEW cov: 11891 ft: 15683 corp: 38/1845b lim: 90 exec/s: 70 rss: 70Mb L: 58/90 MS: 1 CopyPart- 00:07:50.513 [2024-12-15 10:45:39.320121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:50.513 [2024-12-15 10:45:39.320147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.513 #71 NEW cov: 11891 ft: 15694 corp: 39/1877b lim: 90 exec/s: 71 rss: 70Mb L: 32/90 MS: 1 ShuffleBytes- 00:07:50.513 [2024-12-15 10:45:39.360666] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:50.513 [2024-12-15 10:45:39.360697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:1 00:07:50.514 [2024-12-15 10:45:39.360796] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:50.514 [2024-12-15 10:45:39.360812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:50.514 [2024-12-15 10:45:39.360938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:50.514 [2024-12-15 10:45:39.360957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:50.514 #72 NEW cov: 11891 ft: 15700 corp: 40/1935b lim: 90 exec/s: 72 rss: 70Mb L: 58/90 MS: 1 ShuffleBytes- 00:07:50.514 [2024-12-15 10:45:39.401021] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:50.514 [2024-12-15 10:45:39.401052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.514 [2024-12-15 10:45:39.401153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:50.514 [2024-12-15 10:45:39.401176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:50.514 [2024-12-15 10:45:39.401290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:50.514 [2024-12-15 10:45:39.401311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:50.514 [2024-12-15 10:45:39.401422] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:50.514 [2024-12-15 10:45:39.401443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:50.514 #73 NEW cov: 11891 ft: 15711 corp: 41/2024b lim: 90 exec/s: 36 rss: 70Mb L: 89/90 MS: 1 InsertRepeatedBytes- 00:07:50.514 #73 DONE cov: 11891 ft: 15711 corp: 41/2024b lim: 90 exec/s: 36 rss: 70Mb 00:07:50.514 ###### Recommended dictionary. ###### 00:07:50.514 "\002\000\000\000" # Uses: 2 00:07:50.514 ###### End of recommended dictionary. 
######
00:07:50.514 Done 73 runs in 2 second(s)
00:07:50.774 10:45:39 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_20.conf
00:07:50.774 10:45:39 -- ../common.sh@72 -- # (( i++ ))
00:07:50.774 10:45:39 -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:50.774 10:45:39 -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1
00:07:50.774 10:45:39 -- nvmf/run.sh@23 -- # local fuzzer_type=21
00:07:50.774 10:45:39 -- nvmf/run.sh@24 -- # local timen=1
00:07:50.774 10:45:39 -- nvmf/run.sh@25 -- # local core=0x1
00:07:50.774 10:45:39 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21
00:07:50.774 10:45:39 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf
00:07:50.774 10:45:39 -- nvmf/run.sh@29 -- # printf %02d 21
00:07:50.774 10:45:39 -- nvmf/run.sh@29 -- # port=4421
00:07:50.774 10:45:39 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21
00:07:50.774 10:45:39 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421'
00:07:50.774 10:45:39 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:50.774 10:45:39 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 -r /var/tmp/spdk21.sock
00:07:50.774 [2024-12-15 10:45:39.587865] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:50.774 [2024-12-15 10:45:39.587953] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1316359 ]
00:07:50.774 EAL: No free 2048 kB hugepages reported on node 1
00:07:50.774 [2024-12-15 10:45:39.764956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:51.033 [2024-12-15 10:45:39.828585] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:51.033 [2024-12-15 10:45:39.828714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:51.033 [2024-12-15 10:45:39.886426] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:51.033 [2024-12-15 10:45:39.902722] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 ***
00:07:51.033 INFO: Running with entropic power schedule (0xFF, 100).
00:07:51.033 INFO: Seed: 3228320712
00:07:51.033 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5),
00:07:51.033 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668),
00:07:51.033 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21
00:07:51.034 INFO: A corpus is not provided, starting from an empty corpus
00:07:51.034 #2 INITED exec/s: 0 rss: 60Mb
00:07:51.034 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
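The xtrace lines above record how ../common.sh and nvmf/run.sh stage each fuzz round: bump the round counter, derive a per-round TCP port, corpus directory, and config path from the fuzzer type, rewrite the shared fuzz_json.conf for that port, and launch llvm_nvme_fuzz against the rewritten config. What follows is a minimal bash sketch of that per-round setup, reconstructed only from the traced commands; the real start_llvm_fuzz body in test/fuzz/llvm/nvmf/run.sh may differ, and both the 44<NN> port rule (4421 for round 21, 4422 for round 22) and the sed output redirection are inferred, since xtrace shows neither arithmetic nor redirections.

start_llvm_fuzz() {
    local fuzzer_type=$1 timen=$2 core=$3
    local rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    local corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type
    local nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
    # Inferred port rule: "44" + zero-padded round number, so each round
    # listens beside the default NVMe/TCP trsvcid 4420.
    local port="44$(printf %02d "$fuzzer_type")"
    mkdir -p "$corpus_dir"
    local trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    # Specialize the shared JSON config for this round's listener port; the
    # redirection into $nvmf_cfg is inferred from the -c argument below.
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
    # -Z selects the fuzzer entry point (round 21 above exercised
    # RESERVATION RELEASE), -D the per-round corpus, -r the RPC socket.
    "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
        -P "$rootdir/../output/llvm/" -F "$trid" -c "$nvmf_cfg" -t "$timen" \
        -D "$corpus_dir" -Z "$fuzzer_type" -r "/var/tmp/spdk$fuzzer_type.sock"
}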
00:07:51.034 This may also happen if the target rejected all inputs we tried so far 00:07:51.034 [2024-12-15 10:45:39.973092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.034 [2024-12-15 10:45:39.973131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.034 [2024-12-15 10:45:39.973217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:51.034 [2024-12-15 10:45:39.973239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.034 [2024-12-15 10:45:39.973351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:51.034 [2024-12-15 10:45:39.973372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:51.034 [2024-12-15 10:45:39.973491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:51.034 [2024-12-15 10:45:39.973514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:51.293 NEW_FUNC[1/671]: 0x45fc08 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:07:51.293 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:51.293 #5 NEW cov: 11638 ft: 11640 corp: 2/41b lim: 50 exec/s: 0 rss: 68Mb L: 40/40 MS: 3 CrossOver-ShuffleBytes-InsertRepeatedBytes- 00:07:51.293 [2024-12-15 10:45:40.304055] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.293 [2024-12-15 10:45:40.304099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.293 [2024-12-15 10:45:40.304232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:51.293 [2024-12-15 10:45:40.304252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.293 [2024-12-15 10:45:40.304368] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:51.293 [2024-12-15 10:45:40.304391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:51.293 [2024-12-15 10:45:40.304515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:51.293 [2024-12-15 10:45:40.304534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:51.552 NEW_FUNC[1/1]: 0xe94548 in rte_get_timer_cycles /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/include/generic/rte_cycles.h:94 00:07:51.552 #6 NEW cov: 11752 ft: 12044 corp: 3/81b lim: 50 exec/s: 0 rss: 68Mb L: 40/40 MS: 1 ChangeBinInt- 00:07:51.552 [2024-12-15 10:45:40.354072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.552 
[2024-12-15 10:45:40.354103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.552 [2024-12-15 10:45:40.354223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:51.552 [2024-12-15 10:45:40.354245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.552 [2024-12-15 10:45:40.354363] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:51.552 [2024-12-15 10:45:40.354385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:51.552 [2024-12-15 10:45:40.354516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:51.552 [2024-12-15 10:45:40.354538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:51.552 #7 NEW cov: 11758 ft: 12360 corp: 4/121b lim: 50 exec/s: 0 rss: 68Mb L: 40/40 MS: 1 CMP- DE: "\000\000\000\014"- 00:07:51.552 [2024-12-15 10:45:40.393439] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.552 [2024-12-15 10:45:40.393470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.552 #10 NEW cov: 11843 ft: 13455 corp: 5/134b lim: 50 exec/s: 0 rss: 68Mb L: 13/40 MS: 3 CrossOver-ChangeByte-InsertRepeatedBytes- 00:07:51.552 [2024-12-15 10:45:40.433850] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.552 [2024-12-15 10:45:40.433874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.552 [2024-12-15 10:45:40.434007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:51.552 [2024-12-15 10:45:40.434028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.552 #12 NEW cov: 11843 ft: 13997 corp: 6/154b lim: 50 exec/s: 0 rss: 68Mb L: 20/40 MS: 2 CopyPart-CrossOver- 00:07:51.552 [2024-12-15 10:45:40.473907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.552 [2024-12-15 10:45:40.473938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.552 [2024-12-15 10:45:40.474074] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:51.552 [2024-12-15 10:45:40.474097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.552 #13 NEW cov: 11843 ft: 14037 corp: 7/180b lim: 50 exec/s: 0 rss: 68Mb L: 26/40 MS: 1 CrossOver- 00:07:51.552 [2024-12-15 10:45:40.514589] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.552 [2024-12-15 10:45:40.514625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:51.552 [2024-12-15 10:45:40.514711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:51.552 [2024-12-15 10:45:40.514732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.552 [2024-12-15 10:45:40.514860] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:51.552 [2024-12-15 10:45:40.514880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:51.552 [2024-12-15 10:45:40.514997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:51.552 [2024-12-15 10:45:40.515019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:51.552 #14 NEW cov: 11843 ft: 14096 corp: 8/220b lim: 50 exec/s: 0 rss: 68Mb L: 40/40 MS: 1 CopyPart- 00:07:51.552 [2024-12-15 10:45:40.554710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.552 [2024-12-15 10:45:40.554740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.552 [2024-12-15 10:45:40.554844] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:51.552 [2024-12-15 10:45:40.554867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.552 [2024-12-15 10:45:40.554986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:51.552 [2024-12-15 10:45:40.555009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:51.552 [2024-12-15 10:45:40.555137] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:51.552 [2024-12-15 10:45:40.555159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:51.811 #15 NEW cov: 11843 ft: 14165 corp: 9/264b lim: 50 exec/s: 0 rss: 68Mb L: 44/44 MS: 1 CopyPart- 00:07:51.811 [2024-12-15 10:45:40.604467] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.811 [2024-12-15 10:45:40.604494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.811 [2024-12-15 10:45:40.604617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:51.811 [2024-12-15 10:45:40.604637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.811 #16 NEW cov: 11843 ft: 14226 corp: 10/292b lim: 50 exec/s: 0 rss: 68Mb L: 28/44 MS: 1 InsertRepeatedBytes- 00:07:51.811 [2024-12-15 10:45:40.644630] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.811 [2024-12-15 10:45:40.644659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:51.811 [2024-12-15 10:45:40.644743] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:51.811 [2024-12-15 10:45:40.644766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.811 [2024-12-15 10:45:40.644888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:51.811 [2024-12-15 10:45:40.644910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:51.811 [2024-12-15 10:45:40.645026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:51.811 [2024-12-15 10:45:40.645049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:51.811 #17 NEW cov: 11843 ft: 14269 corp: 11/332b lim: 50 exec/s: 0 rss: 68Mb L: 40/44 MS: 1 CopyPart- 00:07:51.812 [2024-12-15 10:45:40.695183] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.812 [2024-12-15 10:45:40.695215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.812 [2024-12-15 10:45:40.695329] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:51.812 [2024-12-15 10:45:40.695348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.812 [2024-12-15 10:45:40.695475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:51.812 [2024-12-15 10:45:40.695502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:51.812 [2024-12-15 10:45:40.695621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:51.812 [2024-12-15 10:45:40.695644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:51.812 #22 NEW cov: 11843 ft: 14271 corp: 12/381b lim: 50 exec/s: 0 rss: 68Mb L: 49/49 MS: 5 ShuffleBytes-ChangeByte-InsertRepeatedBytes-ChangeByte-InsertRepeatedBytes- 00:07:51.812 [2024-12-15 10:45:40.734791] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.812 [2024-12-15 10:45:40.734824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.812 [2024-12-15 10:45:40.734955] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:51.812 [2024-12-15 10:45:40.734975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.812 #25 NEW cov: 11843 ft: 14298 corp: 13/401b lim: 50 exec/s: 0 rss: 68Mb L: 20/49 MS: 3 InsertByte-ChangeBit-CrossOver- 00:07:51.812 [2024-12-15 10:45:40.774956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.812 [2024-12-15 10:45:40.774988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.812 [2024-12-15 10:45:40.775100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:51.812 [2024-12-15 10:45:40.775122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.812 [2024-12-15 10:45:40.775244] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:51.812 [2024-12-15 10:45:40.775264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:51.812 [2024-12-15 10:45:40.775382] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:51.812 [2024-12-15 10:45:40.775405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:51.812 #26 NEW cov: 11843 ft: 14366 corp: 14/450b lim: 50 exec/s: 0 rss: 68Mb L: 49/49 MS: 1 ChangeBinInt- 00:07:52.071 [2024-12-15 10:45:40.825491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.071 [2024-12-15 10:45:40.825525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.071 [2024-12-15 10:45:40.825659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:52.071 [2024-12-15 10:45:40.825683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.071 [2024-12-15 10:45:40.825803] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:52.071 [2024-12-15 10:45:40.825825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.071 [2024-12-15 10:45:40.825944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:52.071 [2024-12-15 10:45:40.825965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:52.071 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:52.071 #27 NEW cov: 11866 ft: 14405 corp: 15/491b lim: 50 exec/s: 0 rss: 69Mb L: 41/49 MS: 1 InsertByte- 00:07:52.071 [2024-12-15 10:45:40.875638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.071 [2024-12-15 10:45:40.875672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.071 [2024-12-15 10:45:40.875783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:52.071 [2024-12-15 10:45:40.875803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.071 [2024-12-15 10:45:40.875918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:52.071 [2024-12-15 10:45:40.875941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.071 [2024-12-15 10:45:40.876055] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:52.071 [2024-12-15 10:45:40.876075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:52.071 #28 NEW cov: 11866 ft: 14420 corp: 16/531b lim: 50 exec/s: 0 rss: 69Mb L: 40/49 MS: 1 ChangeByte- 00:07:52.071 [2024-12-15 10:45:40.915960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.071 [2024-12-15 10:45:40.915988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.071 [2024-12-15 10:45:40.916072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:52.071 [2024-12-15 10:45:40.916086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.071 [2024-12-15 10:45:40.916207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:52.071 [2024-12-15 10:45:40.916226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.071 [2024-12-15 10:45:40.916349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:52.071 [2024-12-15 10:45:40.916372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:52.071 [2024-12-15 10:45:40.916490] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:52.072 [2024-12-15 10:45:40.916514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:52.072 #29 NEW cov: 11866 ft: 14498 corp: 17/581b lim: 50 exec/s: 29 rss: 69Mb L: 50/50 MS: 1 CrossOver- 00:07:52.072 [2024-12-15 10:45:40.966190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.072 [2024-12-15 10:45:40.966224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.072 [2024-12-15 10:45:40.966346] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:52.072 [2024-12-15 10:45:40.966363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.072 [2024-12-15 10:45:40.966489] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:52.072 [2024-12-15 10:45:40.966514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.072 [2024-12-15 10:45:40.966627] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:52.072 [2024-12-15 10:45:40.966646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:52.072 [2024-12-15 10:45:40.966770] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:52.072 [2024-12-15 10:45:40.966792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:52.072 #30 NEW cov: 11866 ft: 14597 corp: 18/631b lim: 50 exec/s: 30 rss: 69Mb L: 50/50 MS: 1 ChangeASCIIInt- 00:07:52.072 [2024-12-15 10:45:41.015668] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.072 [2024-12-15 10:45:41.015699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.072 [2024-12-15 10:45:41.015770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:52.072 [2024-12-15 10:45:41.015788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.072 [2024-12-15 10:45:41.015907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:52.072 [2024-12-15 10:45:41.015929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.072 [2024-12-15 10:45:41.016049] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:52.072 [2024-12-15 10:45:41.016071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:52.072 #31 NEW cov: 11866 ft: 14602 corp: 19/672b lim: 50 exec/s: 31 rss: 69Mb L: 41/50 MS: 1 InsertByte- 00:07:52.072 [2024-12-15 10:45:41.055470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.072 [2024-12-15 10:45:41.055499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.072 #32 NEW cov: 11866 ft: 14660 corp: 20/685b lim: 50 exec/s: 32 rss: 69Mb L: 13/50 MS: 1 ChangeByte- 00:07:52.331 [2024-12-15 10:45:41.095847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.331 [2024-12-15 10:45:41.095879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.331 [2024-12-15 10:45:41.095995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:52.331 [2024-12-15 10:45:41.096018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.331 #33 NEW cov: 11866 ft: 14681 corp: 21/708b lim: 50 exec/s: 33 rss: 69Mb L: 23/50 MS: 1 EraseBytes- 00:07:52.331 [2024-12-15 10:45:41.136430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.331 [2024-12-15 10:45:41.136462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.331 [2024-12-15 10:45:41.136562] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:52.331 [2024-12-15 10:45:41.136588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.331 [2024-12-15 10:45:41.136710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:52.331 [2024-12-15 10:45:41.136731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.331 [2024-12-15 10:45:41.136850] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:52.331 [2024-12-15 10:45:41.136869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:52.331 #34 NEW cov: 11866 ft: 14754 corp: 22/748b lim: 50 exec/s: 34 rss: 69Mb L: 40/50 MS: 1 PersAutoDict- DE: "\000\000\000\014"- 00:07:52.331 [2024-12-15 10:45:41.186522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.331 [2024-12-15 10:45:41.186554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.331 [2024-12-15 10:45:41.186659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:52.331 [2024-12-15 10:45:41.186682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.331 [2024-12-15 10:45:41.186792] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:52.331 [2024-12-15 10:45:41.186812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.331 [2024-12-15 10:45:41.186939] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:52.331 [2024-12-15 10:45:41.186961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:52.331 #35 NEW cov: 11866 ft: 14812 corp: 23/797b lim: 50 exec/s: 35 rss: 69Mb L: 49/50 MS: 1 CopyPart- 00:07:52.331 [2024-12-15 10:45:41.226719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.331 [2024-12-15 10:45:41.226751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.331 [2024-12-15 10:45:41.226866] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:52.331 [2024-12-15 10:45:41.226889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.331 [2024-12-15 10:45:41.227008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:52.331 [2024-12-15 10:45:41.227028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.331 [2024-12-15 10:45:41.227145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:52.331 [2024-12-15 10:45:41.227167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:52.331 #36 NEW cov: 
11866 ft: 14888 corp: 24/837b lim: 50 exec/s: 36 rss: 69Mb L: 40/50 MS: 1 ShuffleBytes- 00:07:52.331 [2024-12-15 10:45:41.266675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.331 [2024-12-15 10:45:41.266705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.331 [2024-12-15 10:45:41.266793] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:52.331 [2024-12-15 10:45:41.266813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.331 [2024-12-15 10:45:41.266928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:52.331 [2024-12-15 10:45:41.266947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.332 [2024-12-15 10:45:41.267068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:52.332 [2024-12-15 10:45:41.267089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:52.332 #37 NEW cov: 11866 ft: 14913 corp: 25/885b lim: 50 exec/s: 37 rss: 69Mb L: 48/50 MS: 1 PersAutoDict- DE: "\000\000\000\014"- 00:07:52.332 [2024-12-15 10:45:41.316242] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.332 [2024-12-15 10:45:41.316274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.591 #38 NEW cov: 11866 ft: 14955 corp: 26/898b lim: 50 exec/s: 38 rss: 70Mb L: 13/50 MS: 1 ChangeBit- 00:07:52.591 [2024-12-15 10:45:41.366395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.591 [2024-12-15 10:45:41.366423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.591 #39 NEW cov: 11866 ft: 14973 corp: 27/911b lim: 50 exec/s: 39 rss: 70Mb L: 13/50 MS: 1 ShuffleBytes- 00:07:52.591 [2024-12-15 10:45:41.407321] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.591 [2024-12-15 10:45:41.407351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.591 [2024-12-15 10:45:41.407478] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:52.591 [2024-12-15 10:45:41.407500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.591 [2024-12-15 10:45:41.407622] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:52.591 [2024-12-15 10:45:41.407642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.591 [2024-12-15 10:45:41.407763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:52.591 [2024-12-15 10:45:41.407786] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:52.591 #40 NEW cov: 11866 ft: 14978 corp: 28/960b lim: 50 exec/s: 40 rss: 70Mb L: 49/50 MS: 1 InsertRepeatedBytes- 00:07:52.591 [2024-12-15 10:45:41.466765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.591 [2024-12-15 10:45:41.466791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.591 #41 NEW cov: 11866 ft: 15024 corp: 29/973b lim: 50 exec/s: 41 rss: 70Mb L: 13/50 MS: 1 CrossOver- 00:07:52.591 [2024-12-15 10:45:41.507604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.591 [2024-12-15 10:45:41.507636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.591 [2024-12-15 10:45:41.507760] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:52.591 [2024-12-15 10:45:41.507780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.591 [2024-12-15 10:45:41.507875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:52.591 [2024-12-15 10:45:41.507896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.591 [2024-12-15 10:45:41.508017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:52.591 [2024-12-15 10:45:41.508037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:52.591 #42 NEW cov: 11866 ft: 15047 corp: 30/1020b lim: 50 exec/s: 42 rss: 70Mb L: 47/50 MS: 1 InsertRepeatedBytes- 00:07:52.591 [2024-12-15 10:45:41.557591] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.591 [2024-12-15 10:45:41.557620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.591 [2024-12-15 10:45:41.557703] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:52.591 [2024-12-15 10:45:41.557726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.591 [2024-12-15 10:45:41.557837] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:52.591 [2024-12-15 10:45:41.557856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.591 [2024-12-15 10:45:41.557968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:52.591 [2024-12-15 10:45:41.557989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:52.591 #43 NEW cov: 11866 ft: 15049 corp: 31/1068b lim: 50 exec/s: 43 rss: 70Mb L: 48/50 MS: 1 CMP- DE: "\305\003\000\000\000\000\000\000"- 00:07:52.591 [2024-12-15 10:45:41.597412] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.591 [2024-12-15 10:45:41.597449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.591 [2024-12-15 10:45:41.597564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:52.592 [2024-12-15 10:45:41.597583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.851 #44 NEW cov: 11866 ft: 15069 corp: 32/1088b lim: 50 exec/s: 44 rss: 70Mb L: 20/50 MS: 1 ChangeBinInt- 00:07:52.851 [2024-12-15 10:45:41.637952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.851 [2024-12-15 10:45:41.637982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.851 [2024-12-15 10:45:41.638098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:52.851 [2024-12-15 10:45:41.638118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.851 [2024-12-15 10:45:41.638236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:52.851 [2024-12-15 10:45:41.638257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.851 [2024-12-15 10:45:41.638382] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:52.851 [2024-12-15 10:45:41.638403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:52.851 #45 NEW cov: 11866 ft: 15097 corp: 33/1136b lim: 50 exec/s: 45 rss: 70Mb L: 48/50 MS: 1 CopyPart- 00:07:52.851 [2024-12-15 10:45:41.678091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.851 [2024-12-15 10:45:41.678118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.851 [2024-12-15 10:45:41.678197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:52.851 [2024-12-15 10:45:41.678218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.851 [2024-12-15 10:45:41.678329] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:52.851 [2024-12-15 10:45:41.678350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.851 [2024-12-15 10:45:41.678479] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:52.851 [2024-12-15 10:45:41.678499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:52.851 #46 NEW cov: 11866 ft: 15111 corp: 34/1176b lim: 50 exec/s: 46 rss: 70Mb L: 40/50 MS: 1 ChangeBit- 00:07:52.851 [2024-12-15 10:45:41.718222] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.851 [2024-12-15 10:45:41.718253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.851 [2024-12-15 10:45:41.718363] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:52.851 [2024-12-15 10:45:41.718384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.851 [2024-12-15 10:45:41.718505] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:52.851 [2024-12-15 10:45:41.718527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.851 [2024-12-15 10:45:41.718646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:52.851 [2024-12-15 10:45:41.718666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:52.851 #47 NEW cov: 11866 ft: 15120 corp: 35/1225b lim: 50 exec/s: 47 rss: 70Mb L: 49/50 MS: 1 InsertRepeatedBytes- 00:07:52.851 [2024-12-15 10:45:41.757652] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.851 [2024-12-15 10:45:41.757677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.851 #48 NEW cov: 11866 ft: 15150 corp: 36/1237b lim: 50 exec/s: 48 rss: 70Mb L: 12/50 MS: 1 EraseBytes- 00:07:52.852 [2024-12-15 10:45:41.798555] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.852 [2024-12-15 10:45:41.798587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.852 [2024-12-15 10:45:41.798682] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:52.852 [2024-12-15 10:45:41.798701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.852 [2024-12-15 10:45:41.798812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:52.852 [2024-12-15 10:45:41.798837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.852 [2024-12-15 10:45:41.798956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:52.852 [2024-12-15 10:45:41.798979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:52.852 #49 NEW cov: 11866 ft: 15163 corp: 37/1282b lim: 50 exec/s: 49 rss: 70Mb L: 45/50 MS: 1 CrossOver- 00:07:52.852 [2024-12-15 10:45:41.837909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.852 [2024-12-15 10:45:41.837939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.111 #50 NEW cov: 11866 ft: 
15181 corp: 38/1294b lim: 50 exec/s: 50 rss: 70Mb L: 12/50 MS: 1 ChangeByte- 00:07:53.111 [2024-12-15 10:45:41.888718] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:53.111 [2024-12-15 10:45:41.888749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.111 [2024-12-15 10:45:41.888864] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:53.111 [2024-12-15 10:45:41.888882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.111 [2024-12-15 10:45:41.889007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:53.111 [2024-12-15 10:45:41.889030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.111 [2024-12-15 10:45:41.889166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:53.111 [2024-12-15 10:45:41.889188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:53.111 #51 NEW cov: 11866 ft: 15186 corp: 39/1343b lim: 50 exec/s: 51 rss: 70Mb L: 49/50 MS: 1 ChangeBinInt- 00:07:53.111 [2024-12-15 10:45:41.927933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:53.111 [2024-12-15 10:45:41.927963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.111 [2024-12-15 10:45:41.928103] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:53.111 [2024-12-15 10:45:41.928125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.111 #52 NEW cov: 11866 ft: 15207 corp: 40/1364b lim: 50 exec/s: 26 rss: 70Mb L: 21/50 MS: 1 InsertByte- 00:07:53.111 #52 DONE cov: 11866 ft: 15207 corp: 40/1364b lim: 50 exec/s: 26 rss: 70Mb 00:07:53.111 ###### Recommended dictionary. ###### 00:07:53.111 "\000\000\000\014" # Uses: 2 00:07:53.111 "\305\003\000\000\000\000\000\000" # Uses: 0 00:07:53.111 ###### End of recommended dictionary. 
######
00:07:53.111 Done 52 runs in 2 second(s)
00:07:53.111 10:45:42 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_21.conf
00:07:53.111 10:45:42 -- ../common.sh@72 -- # (( i++ ))
00:07:53.111 10:45:42 -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:53.111 10:45:42 -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1
00:07:53.111 10:45:42 -- nvmf/run.sh@23 -- # local fuzzer_type=22
00:07:53.111 10:45:42 -- nvmf/run.sh@24 -- # local timen=1
00:07:53.111 10:45:42 -- nvmf/run.sh@25 -- # local core=0x1
00:07:53.111 10:45:42 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22
00:07:53.111 10:45:42 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf
00:07:53.111 10:45:42 -- nvmf/run.sh@29 -- # printf %02d 22
00:07:53.111 10:45:42 -- nvmf/run.sh@29 -- # port=4422
00:07:53.111 10:45:42 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22
00:07:53.111 10:45:42 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422'
00:07:53.111 10:45:42 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:53.111 10:45:42 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 -r /var/tmp/spdk22.sock
00:07:53.370 [2024-12-15 10:45:42.108888] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:53.370 [2024-12-15 10:45:42.108953] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1316676 ]
00:07:53.370 EAL: No free 2048 kB hugepages reported on node 1
00:07:53.370 [2024-12-15 10:45:42.292599] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:53.370 [2024-12-15 10:45:42.356473] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:53.370 [2024-12-15 10:45:42.356598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:53.629 [2024-12-15 10:45:42.414870] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:53.629 [2024-12-15 10:45:42.431173] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 ***
00:07:53.629 INFO: Running with entropic power schedule (0xFF, 100).
00:07:53.629 INFO: Seed: 1462361491
00:07:53.629 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5),
00:07:53.629 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668),
00:07:53.629 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22
00:07:53.629 INFO: A corpus is not provided, starting from an empty corpus
00:07:53.629 #2 INITED exec/s: 0 rss: 60Mb
00:07:53.629 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:53.629 This may also happen if the target rejected all inputs we tried so far 00:07:53.629 [2024-12-15 10:45:42.497606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:53.629 [2024-12-15 10:45:42.497645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.629 [2024-12-15 10:45:42.497750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:53.629 [2024-12-15 10:45:42.497769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.629 [2024-12-15 10:45:42.497887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:53.629 [2024-12-15 10:45:42.497913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.629 [2024-12-15 10:45:42.498027] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:53.629 [2024-12-15 10:45:42.498048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:53.889 NEW_FUNC[1/671]: 0x461ed8 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:07:53.889 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:53.889 #7 NEW cov: 11663 ft: 11666 corp: 2/83b lim: 85 exec/s: 0 rss: 68Mb L: 82/82 MS: 5 CMP-EraseBytes-ChangeBit-ShuffleBytes-InsertRepeatedBytes- DE: "\006\000"- 00:07:53.889 [2024-12-15 10:45:42.828407] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:53.889 [2024-12-15 10:45:42.828448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.889 [2024-12-15 10:45:42.828583] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:53.889 [2024-12-15 10:45:42.828602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.889 [2024-12-15 10:45:42.828714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:53.889 [2024-12-15 10:45:42.828732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.889 [2024-12-15 10:45:42.828838] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:53.889 [2024-12-15 10:45:42.828856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:53.889 NEW_FUNC[1/1]: 0x1c59068 in spdk_thread_get_from_ctx /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:797 00:07:53.889 #8 NEW cov: 11778 ft: 12251 corp: 3/165b lim: 85 exec/s: 0 rss: 68Mb L: 82/82 MS: 1 ChangeBinInt- 00:07:53.889 [2024-12-15 10:45:42.878475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 
cid:0 nsid:0 00:07:53.889 [2024-12-15 10:45:42.878508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.889 [2024-12-15 10:45:42.878617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:53.889 [2024-12-15 10:45:42.878638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.889 [2024-12-15 10:45:42.878754] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:53.889 [2024-12-15 10:45:42.878776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.889 [2024-12-15 10:45:42.878885] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:53.889 [2024-12-15 10:45:42.878906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.149 #9 NEW cov: 11784 ft: 12536 corp: 4/247b lim: 85 exec/s: 0 rss: 68Mb L: 82/82 MS: 1 ShuffleBytes- 00:07:54.149 [2024-12-15 10:45:42.918214] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.149 [2024-12-15 10:45:42.918240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.149 [2024-12-15 10:45:42.918385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.149 [2024-12-15 10:45:42.918407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.149 #10 NEW cov: 11869 ft: 13199 corp: 5/293b lim: 85 exec/s: 0 rss: 68Mb L: 46/82 MS: 1 InsertRepeatedBytes- 00:07:54.149 [2024-12-15 10:45:42.958672] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.149 [2024-12-15 10:45:42.958702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.149 [2024-12-15 10:45:42.958781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.149 [2024-12-15 10:45:42.958803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.149 [2024-12-15 10:45:42.958912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.149 [2024-12-15 10:45:42.958934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.149 [2024-12-15 10:45:42.959055] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.149 [2024-12-15 10:45:42.959076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.149 #11 NEW cov: 11869 ft: 13283 corp: 6/364b lim: 85 exec/s: 0 rss: 68Mb L: 71/82 MS: 1 InsertRepeatedBytes- 00:07:54.149 [2024-12-15 10:45:42.998953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.149 [2024-12-15 10:45:42.998986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.149 [2024-12-15 10:45:42.999118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.149 [2024-12-15 10:45:42.999138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.149 [2024-12-15 10:45:42.999258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.149 [2024-12-15 10:45:42.999281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.149 [2024-12-15 10:45:42.999393] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.149 [2024-12-15 10:45:42.999411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.149 #12 NEW cov: 11869 ft: 13340 corp: 7/446b lim: 85 exec/s: 0 rss: 68Mb L: 82/82 MS: 1 ChangeByte- 00:07:54.149 [2024-12-15 10:45:43.048992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.149 [2024-12-15 10:45:43.049020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.149 [2024-12-15 10:45:43.049121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.149 [2024-12-15 10:45:43.049139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.149 [2024-12-15 10:45:43.049254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.149 [2024-12-15 10:45:43.049275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.149 [2024-12-15 10:45:43.049394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.149 [2024-12-15 10:45:43.049419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.149 #13 NEW cov: 11869 ft: 13391 corp: 8/528b lim: 85 exec/s: 0 rss: 68Mb L: 82/82 MS: 1 ChangeByte- 00:07:54.149 [2024-12-15 10:45:43.089068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.149 [2024-12-15 10:45:43.089099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.149 [2024-12-15 10:45:43.089174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.149 [2024-12-15 10:45:43.089194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.149 [2024-12-15 10:45:43.089310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.149 [2024-12-15 10:45:43.089330] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.149 [2024-12-15 10:45:43.089448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.149 [2024-12-15 10:45:43.089471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.149 #14 NEW cov: 11869 ft: 13442 corp: 9/610b lim: 85 exec/s: 0 rss: 68Mb L: 82/82 MS: 1 ChangeByte- 00:07:54.149 [2024-12-15 10:45:43.129200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.149 [2024-12-15 10:45:43.129230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.149 [2024-12-15 10:45:43.129361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.149 [2024-12-15 10:45:43.129385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.149 [2024-12-15 10:45:43.129503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.149 [2024-12-15 10:45:43.129524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.149 [2024-12-15 10:45:43.129610] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.149 [2024-12-15 10:45:43.129634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.149 #15 NEW cov: 11869 ft: 13506 corp: 10/692b lim: 85 exec/s: 0 rss: 68Mb L: 82/82 MS: 1 ChangeBit- 00:07:54.409 [2024-12-15 10:45:43.169575] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.409 [2024-12-15 10:45:43.169603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.409 [2024-12-15 10:45:43.169680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.409 [2024-12-15 10:45:43.169704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.409 [2024-12-15 10:45:43.169786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.409 [2024-12-15 10:45:43.169810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.409 [2024-12-15 10:45:43.169924] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.409 [2024-12-15 10:45:43.169944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.409 [2024-12-15 10:45:43.170062] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:54.409 [2024-12-15 10:45:43.170082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 
dnr:1 00:07:54.409 #16 NEW cov: 11869 ft: 13596 corp: 11/777b lim: 85 exec/s: 0 rss: 68Mb L: 85/85 MS: 1 CrossOver- 00:07:54.409 [2024-12-15 10:45:43.209254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.409 [2024-12-15 10:45:43.209282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.409 [2024-12-15 10:45:43.209395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.409 [2024-12-15 10:45:43.209418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.409 [2024-12-15 10:45:43.209535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.409 [2024-12-15 10:45:43.209555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.409 #17 NEW cov: 11869 ft: 13949 corp: 12/828b lim: 85 exec/s: 0 rss: 68Mb L: 51/85 MS: 1 EraseBytes- 00:07:54.409 [2024-12-15 10:45:43.249546] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.409 [2024-12-15 10:45:43.249580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.409 [2024-12-15 10:45:43.249689] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.409 [2024-12-15 10:45:43.249706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.409 [2024-12-15 10:45:43.249823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.409 [2024-12-15 10:45:43.249842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.409 [2024-12-15 10:45:43.249956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.409 [2024-12-15 10:45:43.249981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.409 #18 NEW cov: 11869 ft: 14035 corp: 13/910b lim: 85 exec/s: 0 rss: 69Mb L: 82/85 MS: 1 ChangeByte- 00:07:54.409 [2024-12-15 10:45:43.289674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.409 [2024-12-15 10:45:43.289704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.409 [2024-12-15 10:45:43.289823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.409 [2024-12-15 10:45:43.289844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.409 [2024-12-15 10:45:43.289957] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.409 [2024-12-15 10:45:43.289980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 
dnr:1 00:07:54.409 [2024-12-15 10:45:43.290090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.409 [2024-12-15 10:45:43.290111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.409 #19 NEW cov: 11869 ft: 14101 corp: 14/992b lim: 85 exec/s: 0 rss: 69Mb L: 82/85 MS: 1 ChangeBit- 00:07:54.409 [2024-12-15 10:45:43.329826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.409 [2024-12-15 10:45:43.329852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.409 [2024-12-15 10:45:43.329963] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.409 [2024-12-15 10:45:43.329985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.409 [2024-12-15 10:45:43.330094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.409 [2024-12-15 10:45:43.330116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.409 [2024-12-15 10:45:43.330236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.409 [2024-12-15 10:45:43.330257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.409 #20 NEW cov: 11869 ft: 14107 corp: 15/1076b lim: 85 exec/s: 0 rss: 69Mb L: 84/85 MS: 1 PersAutoDict- DE: "\006\000"- 00:07:54.409 [2024-12-15 10:45:43.369950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.409 [2024-12-15 10:45:43.369985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.409 [2024-12-15 10:45:43.370109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.409 [2024-12-15 10:45:43.370129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.409 [2024-12-15 10:45:43.370251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.409 [2024-12-15 10:45:43.370267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.409 [2024-12-15 10:45:43.370395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.409 [2024-12-15 10:45:43.370418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.409 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:54.409 #21 NEW cov: 11892 ft: 14163 corp: 16/1158b lim: 85 exec/s: 0 rss: 69Mb L: 82/85 MS: 1 CrossOver- 00:07:54.409 [2024-12-15 10:45:43.409865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.409 [2024-12-15 
10:45:43.409895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.409 [2024-12-15 10:45:43.410008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.409 [2024-12-15 10:45:43.410032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.409 [2024-12-15 10:45:43.410153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.409 [2024-12-15 10:45:43.410172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.669 #22 NEW cov: 11892 ft: 14195 corp: 17/1209b lim: 85 exec/s: 0 rss: 69Mb L: 51/85 MS: 1 ChangeBinInt- 00:07:54.669 [2024-12-15 10:45:43.449673] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.669 [2024-12-15 10:45:43.449697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.669 [2024-12-15 10:45:43.449819] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.669 [2024-12-15 10:45:43.449844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.669 #23 NEW cov: 11892 ft: 14222 corp: 18/1255b lim: 85 exec/s: 23 rss: 69Mb L: 46/85 MS: 1 CopyPart- 00:07:54.669 [2024-12-15 10:45:43.490268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.669 [2024-12-15 10:45:43.490297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.669 [2024-12-15 10:45:43.490392] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.669 [2024-12-15 10:45:43.490418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.669 [2024-12-15 10:45:43.490554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.669 [2024-12-15 10:45:43.490577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.669 [2024-12-15 10:45:43.490704] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.669 [2024-12-15 10:45:43.490725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.669 #24 NEW cov: 11892 ft: 14249 corp: 19/1337b lim: 85 exec/s: 24 rss: 69Mb L: 82/85 MS: 1 CopyPart- 00:07:54.669 [2024-12-15 10:45:43.530349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.669 [2024-12-15 10:45:43.530383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.669 [2024-12-15 10:45:43.530498] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.669 
[2024-12-15 10:45:43.530524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.669 [2024-12-15 10:45:43.530640] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.669 [2024-12-15 10:45:43.530661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.669 [2024-12-15 10:45:43.530782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.669 [2024-12-15 10:45:43.530808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.669 #25 NEW cov: 11892 ft: 14273 corp: 20/1419b lim: 85 exec/s: 25 rss: 69Mb L: 82/85 MS: 1 PersAutoDict- DE: "\006\000"- 00:07:54.669 [2024-12-15 10:45:43.570784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.669 [2024-12-15 10:45:43.570818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.669 [2024-12-15 10:45:43.570932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.669 [2024-12-15 10:45:43.570957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.669 [2024-12-15 10:45:43.571074] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.669 [2024-12-15 10:45:43.571095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.669 [2024-12-15 10:45:43.571213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.669 [2024-12-15 10:45:43.571239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.669 [2024-12-15 10:45:43.571355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:54.669 [2024-12-15 10:45:43.571376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:54.669 #26 NEW cov: 11892 ft: 14292 corp: 21/1504b lim: 85 exec/s: 26 rss: 69Mb L: 85/85 MS: 1 ShuffleBytes- 00:07:54.669 [2024-12-15 10:45:43.610730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.669 [2024-12-15 10:45:43.610761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.670 [2024-12-15 10:45:43.610887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.670 [2024-12-15 10:45:43.610918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.670 [2024-12-15 10:45:43.611032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.670 [2024-12-15 10:45:43.611056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.670 [2024-12-15 10:45:43.611174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.670 [2024-12-15 10:45:43.611197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.670 #27 NEW cov: 11892 ft: 14324 corp: 22/1586b lim: 85 exec/s: 27 rss: 69Mb L: 82/85 MS: 1 CrossOver- 00:07:54.670 [2024-12-15 10:45:43.650829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.670 [2024-12-15 10:45:43.650864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.670 [2024-12-15 10:45:43.650975] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.670 [2024-12-15 10:45:43.651000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.670 [2024-12-15 10:45:43.651121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.670 [2024-12-15 10:45:43.651142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.670 [2024-12-15 10:45:43.651266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.670 [2024-12-15 10:45:43.651288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.670 #28 NEW cov: 11892 ft: 14342 corp: 23/1668b lim: 85 exec/s: 28 rss: 69Mb L: 82/85 MS: 1 ChangeByte- 00:07:54.929 [2024-12-15 10:45:43.690875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.929 [2024-12-15 10:45:43.690910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.929 [2024-12-15 10:45:43.691018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.929 [2024-12-15 10:45:43.691040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.929 [2024-12-15 10:45:43.691167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.929 [2024-12-15 10:45:43.691190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.929 [2024-12-15 10:45:43.691316] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.929 [2024-12-15 10:45:43.691336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.929 #29 NEW cov: 11892 ft: 14358 corp: 24/1750b lim: 85 exec/s: 29 rss: 69Mb L: 82/85 MS: 1 PersAutoDict- DE: "\006\000"- 00:07:54.929 [2024-12-15 10:45:43.731008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.929 [2024-12-15 10:45:43.731045] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.929 [2024-12-15 10:45:43.731166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.929 [2024-12-15 10:45:43.731188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.929 [2024-12-15 10:45:43.731310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.930 [2024-12-15 10:45:43.731330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.930 [2024-12-15 10:45:43.731460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.930 [2024-12-15 10:45:43.731481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.930 #30 NEW cov: 11892 ft: 14380 corp: 25/1834b lim: 85 exec/s: 30 rss: 69Mb L: 84/85 MS: 1 ShuffleBytes- 00:07:54.930 [2024-12-15 10:45:43.781203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.930 [2024-12-15 10:45:43.781232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.930 [2024-12-15 10:45:43.781344] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.930 [2024-12-15 10:45:43.781366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.930 [2024-12-15 10:45:43.781480] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.930 [2024-12-15 10:45:43.781504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.930 [2024-12-15 10:45:43.781626] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.930 [2024-12-15 10:45:43.781647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.930 #31 NEW cov: 11892 ft: 14400 corp: 26/1917b lim: 85 exec/s: 31 rss: 69Mb L: 83/85 MS: 1 InsertByte- 00:07:54.930 [2024-12-15 10:45:43.830904] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.930 [2024-12-15 10:45:43.830930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.930 [2024-12-15 10:45:43.831040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.930 [2024-12-15 10:45:43.831060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.930 #32 NEW cov: 11892 ft: 14407 corp: 27/1964b lim: 85 exec/s: 32 rss: 69Mb L: 47/85 MS: 1 EraseBytes- 00:07:54.930 [2024-12-15 10:45:43.881476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.930 [2024-12-15 10:45:43.881507] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.930 [2024-12-15 10:45:43.881582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.930 [2024-12-15 10:45:43.881600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.930 [2024-12-15 10:45:43.881715] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.930 [2024-12-15 10:45:43.881735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.930 [2024-12-15 10:45:43.881847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.930 [2024-12-15 10:45:43.881869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.930 #33 NEW cov: 11892 ft: 14415 corp: 28/2046b lim: 85 exec/s: 33 rss: 69Mb L: 82/85 MS: 1 ChangeBinInt- 00:07:54.930 [2024-12-15 10:45:43.921621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.930 [2024-12-15 10:45:43.921654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.930 [2024-12-15 10:45:43.921770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.930 [2024-12-15 10:45:43.921804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.930 [2024-12-15 10:45:43.921916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.930 [2024-12-15 10:45:43.921940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.930 [2024-12-15 10:45:43.922058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.930 [2024-12-15 10:45:43.922086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.930 #34 NEW cov: 11892 ft: 14440 corp: 29/2130b lim: 85 exec/s: 34 rss: 69Mb L: 84/85 MS: 1 PersAutoDict- DE: "\006\000"- 00:07:55.189 [2024-12-15 10:45:43.961780] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:55.189 [2024-12-15 10:45:43.961810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.189 [2024-12-15 10:45:43.961903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:55.189 [2024-12-15 10:45:43.961923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.189 [2024-12-15 10:45:43.962040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:55.189 [2024-12-15 10:45:43.962064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.189 [2024-12-15 10:45:43.962185] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:55.189 [2024-12-15 10:45:43.962211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.189 #35 NEW cov: 11892 ft: 14472 corp: 30/2214b lim: 85 exec/s: 35 rss: 69Mb L: 84/85 MS: 1 ChangeBit- 00:07:55.189 [2024-12-15 10:45:44.001855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:55.189 [2024-12-15 10:45:44.001886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.190 [2024-12-15 10:45:44.001981] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:55.190 [2024-12-15 10:45:44.002004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.190 [2024-12-15 10:45:44.002118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:55.190 [2024-12-15 10:45:44.002140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.190 [2024-12-15 10:45:44.002256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:55.190 [2024-12-15 10:45:44.002275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.190 #36 NEW cov: 11892 ft: 14481 corp: 31/2296b lim: 85 exec/s: 36 rss: 69Mb L: 82/85 MS: 1 ChangeBinInt- 00:07:55.190 [2024-12-15 10:45:44.041936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:55.190 [2024-12-15 10:45:44.041966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.190 [2024-12-15 10:45:44.042065] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:55.190 [2024-12-15 10:45:44.042090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.190 [2024-12-15 10:45:44.042204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:55.190 [2024-12-15 10:45:44.042229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.190 [2024-12-15 10:45:44.042355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:55.190 [2024-12-15 10:45:44.042378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.190 #37 NEW cov: 11892 ft: 14491 corp: 32/2378b lim: 85 exec/s: 37 rss: 70Mb L: 82/85 MS: 1 ShuffleBytes- 00:07:55.190 [2024-12-15 10:45:44.091963] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:55.190 [2024-12-15 10:45:44.091995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.190 [2024-12-15 10:45:44.092106] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:55.190 [2024-12-15 10:45:44.092132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.190 [2024-12-15 10:45:44.092258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:55.190 [2024-12-15 10:45:44.092281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.190 #38 NEW cov: 11892 ft: 14503 corp: 33/2429b lim: 85 exec/s: 38 rss: 70Mb L: 51/85 MS: 1 EraseBytes- 00:07:55.190 [2024-12-15 10:45:44.132252] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:55.190 [2024-12-15 10:45:44.132282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.190 [2024-12-15 10:45:44.132395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:55.190 [2024-12-15 10:45:44.132411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.190 [2024-12-15 10:45:44.132534] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:55.190 [2024-12-15 10:45:44.132555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.190 [2024-12-15 10:45:44.132677] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:55.190 [2024-12-15 10:45:44.132698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.190 #39 NEW cov: 11892 ft: 14519 corp: 34/2511b lim: 85 exec/s: 39 rss: 70Mb L: 82/85 MS: 1 ChangeBinInt- 00:07:55.190 [2024-12-15 10:45:44.172485] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:55.190 [2024-12-15 10:45:44.172515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.190 [2024-12-15 10:45:44.172640] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:55.190 [2024-12-15 10:45:44.172661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.190 [2024-12-15 10:45:44.172770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:55.190 [2024-12-15 10:45:44.172793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.190 [2024-12-15 10:45:44.172910] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:55.190 [2024-12-15 10:45:44.172929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.190 #40 NEW cov: 11892 ft: 14589 corp: 
35/2593b lim: 85 exec/s: 40 rss: 70Mb L: 82/85 MS: 1 ChangeBinInt- 00:07:55.449 [2024-12-15 10:45:44.222589] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:55.449 [2024-12-15 10:45:44.222620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.449 [2024-12-15 10:45:44.222731] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:55.449 [2024-12-15 10:45:44.222761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.449 [2024-12-15 10:45:44.222881] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:55.449 [2024-12-15 10:45:44.222900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.449 [2024-12-15 10:45:44.223018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:55.449 [2024-12-15 10:45:44.223036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.449 #41 NEW cov: 11892 ft: 14597 corp: 36/2675b lim: 85 exec/s: 41 rss: 70Mb L: 82/85 MS: 1 CrossOver- 00:07:55.449 [2024-12-15 10:45:44.272665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:55.449 [2024-12-15 10:45:44.272694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.449 [2024-12-15 10:45:44.272793] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:55.449 [2024-12-15 10:45:44.272815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.450 [2024-12-15 10:45:44.272928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:55.450 [2024-12-15 10:45:44.272952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.450 [2024-12-15 10:45:44.273071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:55.450 [2024-12-15 10:45:44.273094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.450 #42 NEW cov: 11892 ft: 14600 corp: 37/2758b lim: 85 exec/s: 42 rss: 70Mb L: 83/85 MS: 1 InsertByte- 00:07:55.450 [2024-12-15 10:45:44.312785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:55.450 [2024-12-15 10:45:44.312814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.450 [2024-12-15 10:45:44.312923] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:55.450 [2024-12-15 10:45:44.312944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.450 [2024-12-15 10:45:44.313056] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:55.450 [2024-12-15 10:45:44.313077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.450 [2024-12-15 10:45:44.313196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:55.450 [2024-12-15 10:45:44.313218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.450 #43 NEW cov: 11892 ft: 14607 corp: 38/2842b lim: 85 exec/s: 43 rss: 70Mb L: 84/85 MS: 1 PersAutoDict- DE: "\006\000"- 00:07:55.450 [2024-12-15 10:45:44.352632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:55.450 [2024-12-15 10:45:44.352664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.450 [2024-12-15 10:45:44.352776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:55.450 [2024-12-15 10:45:44.352796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.450 [2024-12-15 10:45:44.352922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:55.450 [2024-12-15 10:45:44.352943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.450 #44 NEW cov: 11892 ft: 14625 corp: 39/2897b lim: 85 exec/s: 44 rss: 70Mb L: 55/85 MS: 1 InsertRepeatedBytes- 00:07:55.450 [2024-12-15 10:45:44.392718] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:55.450 [2024-12-15 10:45:44.392749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.450 [2024-12-15 10:45:44.392854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:55.450 [2024-12-15 10:45:44.392873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.450 [2024-12-15 10:45:44.392988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:55.450 [2024-12-15 10:45:44.393007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.450 #45 NEW cov: 11892 ft: 14634 corp: 40/2955b lim: 85 exec/s: 45 rss: 70Mb L: 58/85 MS: 1 CopyPart- 00:07:55.450 [2024-12-15 10:45:44.433034] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:55.450 [2024-12-15 10:45:44.433066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.450 [2024-12-15 10:45:44.433177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:55.450 [2024-12-15 10:45:44.433213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.450 
[2024-12-15 10:45:44.433331] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:55.450 [2024-12-15 10:45:44.433351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.450 [2024-12-15 10:45:44.433477] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:55.450 [2024-12-15 10:45:44.433501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.450 #46 NEW cov: 11892 ft: 14639 corp: 41/3037b lim: 85 exec/s: 46 rss: 70Mb L: 82/85 MS: 1 ChangeByte- 00:07:55.709 [2024-12-15 10:45:44.473399] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:55.709 [2024-12-15 10:45:44.473435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.709 [2024-12-15 10:45:44.473565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:55.709 [2024-12-15 10:45:44.473586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.709 [2024-12-15 10:45:44.473712] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:55.709 [2024-12-15 10:45:44.473746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.709 [2024-12-15 10:45:44.473863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:55.710 [2024-12-15 10:45:44.473882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.710 [2024-12-15 10:45:44.474006] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:55.710 [2024-12-15 10:45:44.474029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:55.710 #47 NEW cov: 11892 ft: 14649 corp: 42/3122b lim: 85 exec/s: 23 rss: 70Mb L: 85/85 MS: 1 CopyPart- 00:07:55.710 #47 DONE cov: 11892 ft: 14649 corp: 42/3122b lim: 85 exec/s: 23 rss: 70Mb 00:07:55.710 ###### Recommended dictionary. ###### 00:07:55.710 "\006\000" # Uses: 5 00:07:55.710 ###### End of recommended dictionary. 
###### 00:07:55.710 Done 47 runs in 2 second(s) 00:07:55.710 10:45:44 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_22.conf 00:07:55.710 10:45:44 -- ../common.sh@72 -- # (( i++ )) 00:07:55.710 10:45:44 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:55.710 10:45:44 -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:07:55.710 10:45:44 -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:07:55.710 10:45:44 -- nvmf/run.sh@24 -- # local timen=1 00:07:55.710 10:45:44 -- nvmf/run.sh@25 -- # local core=0x1 00:07:55.710 10:45:44 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:55.710 10:45:44 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:07:55.710 10:45:44 -- nvmf/run.sh@29 -- # printf %02d 23 00:07:55.710 10:45:44 -- nvmf/run.sh@29 -- # port=4423 00:07:55.710 10:45:44 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:55.710 10:45:44 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:07:55.710 10:45:44 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:55.710 10:45:44 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 -r /var/tmp/spdk23.sock 00:07:55.710 [2024-12-15 10:45:44.656945] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:55.710 [2024-12-15 10:45:44.657010] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1317195 ] 00:07:55.710 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.969 [2024-12-15 10:45:44.835729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.969 [2024-12-15 10:45:44.898058] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:55.969 [2024-12-15 10:45:44.898184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.969 [2024-12-15 10:45:44.955923] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.969 [2024-12-15 10:45:44.972190] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:07:56.230 INFO: Running with entropic power schedule (0xFF, 100). 00:07:56.230 INFO: Seed: 4002337436 00:07:56.230 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:56.230 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:56.230 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:56.230 INFO: A corpus is not provided, starting from an empty corpus 00:07:56.230 #2 INITED exec/s: 0 rss: 61Mb 00:07:56.230 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:56.230 This may also happen if the target rejected all inputs we tried so far 00:07:56.230 [2024-12-15 10:45:45.020693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.230 [2024-12-15 10:45:45.020722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.490 NEW_FUNC[1/671]: 0x465118 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:07:56.490 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:56.490 #10 NEW cov: 11598 ft: 11592 corp: 2/9b lim: 25 exec/s: 0 rss: 68Mb L: 8/8 MS: 3 ChangeByte-CrossOver-InsertRepeatedBytes- 00:07:56.490 [2024-12-15 10:45:45.352571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.490 [2024-12-15 10:45:45.352636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.490 [2024-12-15 10:45:45.352780] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.490 [2024-12-15 10:45:45.352815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.490 [2024-12-15 10:45:45.352964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.490 [2024-12-15 10:45:45.352997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.490 #12 NEW cov: 11711 ft: 12753 corp: 3/26b lim: 25 exec/s: 0 rss: 68Mb L: 17/17 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:07:56.490 [2024-12-15 10:45:45.402204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.490 [2024-12-15 10:45:45.402235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.490 #13 NEW cov: 11717 ft: 12946 corp: 4/34b lim: 25 exec/s: 0 rss: 68Mb L: 8/17 MS: 1 ChangeBit- 00:07:56.490 [2024-12-15 10:45:45.452636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.490 [2024-12-15 10:45:45.452667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.490 [2024-12-15 10:45:45.452785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.490 [2024-12-15 10:45:45.452805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.490 [2024-12-15 10:45:45.452921] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.490 [2024-12-15 10:45:45.452944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.490 #14 NEW cov: 11802 ft: 13178 corp: 5/51b lim: 25 exec/s: 0 rss: 68Mb L: 17/17 MS: 1 ShuffleBytes- 00:07:56.490 [2024-12-15 10:45:45.503089] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.490 [2024-12-15 10:45:45.503120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.490 [2024-12-15 10:45:45.503237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.490 [2024-12-15 10:45:45.503257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.490 [2024-12-15 10:45:45.503371] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.490 [2024-12-15 10:45:45.503394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.490 [2024-12-15 10:45:45.503522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:56.490 [2024-12-15 10:45:45.503544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.749 #20 NEW cov: 11802 ft: 13691 corp: 6/74b lim: 25 exec/s: 0 rss: 68Mb L: 23/23 MS: 1 InsertRepeatedBytes- 00:07:56.749 [2024-12-15 10:45:45.542547] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.749 [2024-12-15 10:45:45.542578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.749 #21 NEW cov: 11802 ft: 13765 corp: 7/82b lim: 25 exec/s: 0 rss: 68Mb L: 8/23 MS: 1 ChangeBinInt- 00:07:56.749 [2024-12-15 10:45:45.582692] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.749 [2024-12-15 10:45:45.582723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.749 #22 NEW cov: 11802 ft: 13825 corp: 8/91b lim: 25 exec/s: 0 rss: 68Mb L: 9/23 MS: 1 InsertByte- 00:07:56.749 [2024-12-15 10:45:45.632866] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.749 [2024-12-15 10:45:45.632893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.749 #23 NEW cov: 11802 ft: 13837 corp: 9/100b lim: 25 exec/s: 0 rss: 68Mb L: 9/23 MS: 1 InsertByte- 00:07:56.749 [2024-12-15 10:45:45.673570] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.749 [2024-12-15 10:45:45.673602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.749 [2024-12-15 10:45:45.673738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.749 [2024-12-15 10:45:45.673760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.749 [2024-12-15 10:45:45.673876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.749 [2024-12-15 10:45:45.673901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 
dnr:1 00:07:56.749 [2024-12-15 10:45:45.674024] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:56.749 [2024-12-15 10:45:45.674043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.749 #24 NEW cov: 11802 ft: 13873 corp: 10/122b lim: 25 exec/s: 0 rss: 68Mb L: 22/23 MS: 1 InsertRepeatedBytes- 00:07:56.749 [2024-12-15 10:45:45.713425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.749 [2024-12-15 10:45:45.713455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.749 [2024-12-15 10:45:45.713585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.749 [2024-12-15 10:45:45.713607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.749 [2024-12-15 10:45:45.713719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.750 [2024-12-15 10:45:45.713740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.750 #25 NEW cov: 11802 ft: 13997 corp: 11/139b lim: 25 exec/s: 0 rss: 68Mb L: 17/23 MS: 1 ChangeBit- 00:07:56.750 [2024-12-15 10:45:45.753807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.750 [2024-12-15 10:45:45.753837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.750 [2024-12-15 10:45:45.753918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.750 [2024-12-15 10:45:45.753942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.750 [2024-12-15 10:45:45.754059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.750 [2024-12-15 10:45:45.754080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.750 [2024-12-15 10:45:45.754202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:56.750 [2024-12-15 10:45:45.754224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.009 #26 NEW cov: 11802 ft: 14078 corp: 12/162b lim: 25 exec/s: 0 rss: 68Mb L: 23/23 MS: 1 CopyPart- 00:07:57.009 [2024-12-15 10:45:45.793678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.009 [2024-12-15 10:45:45.793708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.009 [2024-12-15 10:45:45.793829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.009 [2024-12-15 10:45:45.793852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:57.009 [2024-12-15 10:45:45.793971] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:57.009 [2024-12-15 10:45:45.793989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.009 #27 NEW cov: 11802 ft: 14123 corp: 13/179b lim: 25 exec/s: 0 rss: 68Mb L: 17/23 MS: 1 ChangeByte- 00:07:57.009 [2024-12-15 10:45:45.833723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.009 [2024-12-15 10:45:45.833753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.009 [2024-12-15 10:45:45.833851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.009 [2024-12-15 10:45:45.833873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.009 [2024-12-15 10:45:45.833983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:57.009 [2024-12-15 10:45:45.834006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.009 #28 NEW cov: 11802 ft: 14153 corp: 14/196b lim: 25 exec/s: 0 rss: 68Mb L: 17/23 MS: 1 ChangeBinInt- 00:07:57.009 [2024-12-15 10:45:45.873746] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.009 [2024-12-15 10:45:45.873770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.009 [2024-12-15 10:45:45.873892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.009 [2024-12-15 10:45:45.873911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.009 #29 NEW cov: 11802 ft: 14370 corp: 15/210b lim: 25 exec/s: 0 rss: 68Mb L: 14/23 MS: 1 CopyPart- 00:07:57.009 [2024-12-15 10:45:45.914060] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.009 [2024-12-15 10:45:45.914088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.009 [2024-12-15 10:45:45.914225] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.009 [2024-12-15 10:45:45.914248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.010 [2024-12-15 10:45:45.914365] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:57.010 [2024-12-15 10:45:45.914385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.010 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:57.010 #30 NEW cov: 11825 ft: 14434 corp: 16/227b lim: 25 exec/s: 0 rss: 69Mb L: 17/23 MS: 1 ChangeBit- 00:07:57.010 [2024-12-15 10:45:45.964321] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.010 [2024-12-15 10:45:45.964351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.010 [2024-12-15 10:45:45.964449] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.010 [2024-12-15 10:45:45.964485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.010 [2024-12-15 10:45:45.964600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:57.010 [2024-12-15 10:45:45.964618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.010 [2024-12-15 10:45:45.964741] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:57.010 [2024-12-15 10:45:45.964758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.010 #31 NEW cov: 11825 ft: 14454 corp: 17/249b lim: 25 exec/s: 0 rss: 69Mb L: 22/23 MS: 1 ChangeBinInt- 00:07:57.010 [2024-12-15 10:45:46.003935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.010 [2024-12-15 10:45:46.003960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.269 #32 NEW cov: 11825 ft: 14483 corp: 18/254b lim: 25 exec/s: 32 rss: 69Mb L: 5/23 MS: 1 EraseBytes- 00:07:57.269 [2024-12-15 10:45:46.044617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.269 [2024-12-15 10:45:46.044647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.269 [2024-12-15 10:45:46.044738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.269 [2024-12-15 10:45:46.044758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.269 [2024-12-15 10:45:46.044876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:57.269 [2024-12-15 10:45:46.044896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.269 [2024-12-15 10:45:46.045015] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:57.269 [2024-12-15 10:45:46.045036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.269 #33 NEW cov: 11825 ft: 14505 corp: 19/276b lim: 25 exec/s: 33 rss: 69Mb L: 22/23 MS: 1 ChangeBinInt- 00:07:57.269 [2024-12-15 10:45:46.084735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.269 [2024-12-15 10:45:46.084763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.269 [2024-12-15 10:45:46.084860] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.269 [2024-12-15 10:45:46.084883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.269 [2024-12-15 10:45:46.084998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:57.269 [2024-12-15 10:45:46.085016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.269 [2024-12-15 10:45:46.085142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:57.269 [2024-12-15 10:45:46.085161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.269 #34 NEW cov: 11825 ft: 14514 corp: 20/298b lim: 25 exec/s: 34 rss: 69Mb L: 22/23 MS: 1 InsertRepeatedBytes- 00:07:57.269 [2024-12-15 10:45:46.134939] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.269 [2024-12-15 10:45:46.134969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.270 [2024-12-15 10:45:46.135054] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.270 [2024-12-15 10:45:46.135078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.270 [2024-12-15 10:45:46.135192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:57.270 [2024-12-15 10:45:46.135214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.270 [2024-12-15 10:45:46.135336] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:57.270 [2024-12-15 10:45:46.135359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.270 #35 NEW cov: 11825 ft: 14595 corp: 21/320b lim: 25 exec/s: 35 rss: 69Mb L: 22/23 MS: 1 ChangeBit- 00:07:57.270 [2024-12-15 10:45:46.184880] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.270 [2024-12-15 10:45:46.184911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.270 [2024-12-15 10:45:46.185009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.270 [2024-12-15 10:45:46.185027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.270 [2024-12-15 10:45:46.185164] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:57.270 [2024-12-15 10:45:46.185185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.270 #36 NEW cov: 11825 ft: 14650 corp: 22/337b lim: 25 exec/s: 36 rss: 69Mb L: 17/23 MS: 1 ChangeBit- 00:07:57.270 [2024-12-15 10:45:46.224848] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) 
sqid:1 cid:0 nsid:0 00:07:57.270 [2024-12-15 10:45:46.224873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.270 [2024-12-15 10:45:46.224995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.270 [2024-12-15 10:45:46.225015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.270 #37 NEW cov: 11825 ft: 14682 corp: 23/351b lim: 25 exec/s: 37 rss: 69Mb L: 14/23 MS: 1 ShuffleBytes- 00:07:57.270 [2024-12-15 10:45:46.264985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.270 [2024-12-15 10:45:46.265009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.270 [2024-12-15 10:45:46.265130] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.270 [2024-12-15 10:45:46.265146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.529 #38 NEW cov: 11825 ft: 14718 corp: 24/365b lim: 25 exec/s: 38 rss: 69Mb L: 14/23 MS: 1 EraseBytes- 00:07:57.529 [2024-12-15 10:45:46.305041] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.529 [2024-12-15 10:45:46.305068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.529 [2024-12-15 10:45:46.305187] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.529 [2024-12-15 10:45:46.305210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.529 #39 NEW cov: 11825 ft: 14726 corp: 25/375b lim: 25 exec/s: 39 rss: 69Mb L: 10/23 MS: 1 EraseBytes- 00:07:57.529 [2024-12-15 10:45:46.345276] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.529 [2024-12-15 10:45:46.345304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.529 [2024-12-15 10:45:46.345419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.529 [2024-12-15 10:45:46.345441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.529 #40 NEW cov: 11825 ft: 14739 corp: 26/386b lim: 25 exec/s: 40 rss: 69Mb L: 11/23 MS: 1 EraseBytes- 00:07:57.529 [2024-12-15 10:45:46.385672] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.529 [2024-12-15 10:45:46.385701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.529 [2024-12-15 10:45:46.385776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.529 [2024-12-15 10:45:46.385797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.529 [2024-12-15 
10:45:46.385915] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:57.529 [2024-12-15 10:45:46.385937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.529 [2024-12-15 10:45:46.386059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:57.529 [2024-12-15 10:45:46.386076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.529 #41 NEW cov: 11825 ft: 14757 corp: 27/409b lim: 25 exec/s: 41 rss: 69Mb L: 23/23 MS: 1 InsertByte- 00:07:57.529 [2024-12-15 10:45:46.425472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.529 [2024-12-15 10:45:46.425512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.529 [2024-12-15 10:45:46.425645] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.529 [2024-12-15 10:45:46.425667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.529 #42 NEW cov: 11825 ft: 14770 corp: 28/420b lim: 25 exec/s: 42 rss: 69Mb L: 11/23 MS: 1 CrossOver- 00:07:57.529 [2024-12-15 10:45:46.465506] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.529 [2024-12-15 10:45:46.465539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.529 [2024-12-15 10:45:46.465684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.529 [2024-12-15 10:45:46.465702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.529 #43 NEW cov: 11825 ft: 14810 corp: 29/431b lim: 25 exec/s: 43 rss: 69Mb L: 11/23 MS: 1 ChangeBinInt- 00:07:57.529 [2024-12-15 10:45:46.515867] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.529 [2024-12-15 10:45:46.515899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.529 [2024-12-15 10:45:46.516039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.529 [2024-12-15 10:45:46.516062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.529 [2024-12-15 10:45:46.516186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:57.529 [2024-12-15 10:45:46.516211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.529 #44 NEW cov: 11825 ft: 14824 corp: 30/448b lim: 25 exec/s: 44 rss: 69Mb L: 17/23 MS: 1 ChangeBit- 00:07:57.789 [2024-12-15 10:45:46.556225] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.789 [2024-12-15 10:45:46.556259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.789 [2024-12-15 10:45:46.556393] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.789 [2024-12-15 10:45:46.556418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.789 [2024-12-15 10:45:46.556538] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:57.789 [2024-12-15 10:45:46.556561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.789 [2024-12-15 10:45:46.556677] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:57.789 [2024-12-15 10:45:46.556701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.789 #45 NEW cov: 11825 ft: 14830 corp: 31/472b lim: 25 exec/s: 45 rss: 70Mb L: 24/24 MS: 1 CMP- DE: "\013\000"- 00:07:57.789 [2024-12-15 10:45:46.606260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.789 [2024-12-15 10:45:46.606289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.789 [2024-12-15 10:45:46.606393] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.789 [2024-12-15 10:45:46.606412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.789 [2024-12-15 10:45:46.606541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:57.789 [2024-12-15 10:45:46.606564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.789 [2024-12-15 10:45:46.606685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:57.789 [2024-12-15 10:45:46.606705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.789 #46 NEW cov: 11825 ft: 14840 corp: 32/495b lim: 25 exec/s: 46 rss: 70Mb L: 23/24 MS: 1 InsertByte- 00:07:57.789 [2024-12-15 10:45:46.646252] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.789 [2024-12-15 10:45:46.646283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.789 [2024-12-15 10:45:46.646419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.789 [2024-12-15 10:45:46.646436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.789 [2024-12-15 10:45:46.646566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:57.789 [2024-12-15 10:45:46.646590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.789 #47 NEW cov: 11825 ft: 14910 corp: 
33/512b lim: 25 exec/s: 47 rss: 70Mb L: 17/24 MS: 1 ChangeBit- 00:07:57.789 [2024-12-15 10:45:46.696419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.789 [2024-12-15 10:45:46.696453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.789 [2024-12-15 10:45:46.696559] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.789 [2024-12-15 10:45:46.696581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.789 [2024-12-15 10:45:46.696707] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:57.789 [2024-12-15 10:45:46.696727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.789 #48 NEW cov: 11825 ft: 14919 corp: 34/529b lim: 25 exec/s: 48 rss: 70Mb L: 17/24 MS: 1 InsertRepeatedBytes- 00:07:57.789 [2024-12-15 10:45:46.736730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.789 [2024-12-15 10:45:46.736759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.789 [2024-12-15 10:45:46.736848] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.789 [2024-12-15 10:45:46.736869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.789 [2024-12-15 10:45:46.736990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:57.789 [2024-12-15 10:45:46.737012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.789 [2024-12-15 10:45:46.737134] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:57.789 [2024-12-15 10:45:46.737154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.789 #49 NEW cov: 11825 ft: 14943 corp: 35/551b lim: 25 exec/s: 49 rss: 70Mb L: 22/24 MS: 1 InsertRepeatedBytes- 00:07:57.789 [2024-12-15 10:45:46.776880] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.789 [2024-12-15 10:45:46.776908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.789 [2024-12-15 10:45:46.777024] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.789 [2024-12-15 10:45:46.777050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.790 [2024-12-15 10:45:46.777172] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:57.790 [2024-12-15 10:45:46.777195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.790 [2024-12-15 10:45:46.777314] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:57.790 [2024-12-15 10:45:46.777338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:58.049 #50 NEW cov: 11825 ft: 14977 corp: 36/574b lim: 25 exec/s: 50 rss: 70Mb L: 23/24 MS: 1 InsertByte- 00:07:58.049 [2024-12-15 10:45:46.826894] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:58.049 [2024-12-15 10:45:46.826927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.049 [2024-12-15 10:45:46.827038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:58.049 [2024-12-15 10:45:46.827061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.049 [2024-12-15 10:45:46.827190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:58.049 [2024-12-15 10:45:46.827212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.049 #51 NEW cov: 11825 ft: 14982 corp: 37/591b lim: 25 exec/s: 51 rss: 70Mb L: 17/24 MS: 1 CopyPart- 00:07:58.049 [2024-12-15 10:45:46.866933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:58.049 [2024-12-15 10:45:46.866963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.049 [2024-12-15 10:45:46.867080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:58.049 [2024-12-15 10:45:46.867104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.049 [2024-12-15 10:45:46.867229] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:58.049 [2024-12-15 10:45:46.867250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.049 #52 NEW cov: 11825 ft: 14989 corp: 38/610b lim: 25 exec/s: 52 rss: 70Mb L: 19/24 MS: 1 PersAutoDict- DE: "\013\000"- 00:07:58.049 [2024-12-15 10:45:46.906928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:58.049 [2024-12-15 10:45:46.906962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.049 [2024-12-15 10:45:46.907097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:58.049 [2024-12-15 10:45:46.907121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.049 #53 NEW cov: 11825 ft: 15002 corp: 39/624b lim: 25 exec/s: 53 rss: 70Mb L: 14/24 MS: 1 CopyPart- 00:07:58.049 [2024-12-15 10:45:46.957504] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:58.049 [2024-12-15 10:45:46.957536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:58.049 [2024-12-15 10:45:46.957633] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:07:58.049 [2024-12-15 10:45:46.957649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:58.049 [2024-12-15 10:45:46.957762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:07:58.049 [2024-12-15 10:45:46.957783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:58.050 [2024-12-15 10:45:46.957901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:07:58.050 [2024-12-15 10:45:46.957920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:58.050 [2024-12-15 10:45:46.958049] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0
00:07:58.050 [2024-12-15 10:45:46.958066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1
00:07:58.050 #54 NEW cov: 11825 ft: 15033 corp: 40/649b lim: 25 exec/s: 54 rss: 70Mb L: 25/25 MS: 1 PersAutoDict- DE: "\013\000"-
00:07:58.050 [2024-12-15 10:45:47.007554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:07:58.050 [2024-12-15 10:45:47.007587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:58.050 [2024-12-15 10:45:47.007686] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:07:58.050 [2024-12-15 10:45:47.007708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:58.050 [2024-12-15 10:45:47.007836] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:07:58.050 [2024-12-15 10:45:47.007858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:58.050 [2024-12-15 10:45:47.007981] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:07:58.050 [2024-12-15 10:45:47.008006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:58.050 #55 NEW cov: 11825 ft: 15049 corp: 41/672b lim: 25 exec/s: 27 rss: 70Mb L: 23/25 MS: 1 InsertByte-
00:07:58.050 #55 DONE cov: 11825 ft: 15049 corp: 41/672b lim: 25 exec/s: 27 rss: 70Mb
00:07:58.050 ###### Recommended dictionary. ######
00:07:58.050 "\013\000" # Uses: 2
00:07:58.050 ###### End of recommended dictionary. ######
00:07:58.050 Done 55 runs in 2 second(s)
00:07:58.309 10:45:47 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_23.conf
00:07:58.309 10:45:47 -- ../common.sh@72 -- # (( i++ ))
00:07:58.309 10:45:47 -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:58.309 10:45:47 -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1
00:07:58.309 10:45:47 -- nvmf/run.sh@23 -- # local fuzzer_type=24
00:07:58.309 10:45:47 -- nvmf/run.sh@24 -- # local timen=1
00:07:58.309 10:45:47 -- nvmf/run.sh@25 -- # local core=0x1
00:07:58.309 10:45:47 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24
00:07:58.309 10:45:47 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf
00:07:58.309 10:45:47 -- nvmf/run.sh@29 -- # printf %02d 24
00:07:58.309 10:45:47 -- nvmf/run.sh@29 -- # port=4424
00:07:58.309 10:45:47 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24
00:07:58.309 10:45:47 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424'
00:07:58.309 10:45:47 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:58.309 10:45:47 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 -r /var/tmp/spdk24.sock
00:07:58.309 [2024-12-15 10:45:47.196239] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:58.309 [2024-12-15 10:45:47.196302] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1317729 ]
00:07:58.568 EAL: No free 2048 kB hugepages reported on node 1
00:07:58.568 [2024-12-15 10:45:47.448349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:58.568 [2024-12-15 10:45:47.537370] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:58.568 [2024-12-15 10:45:47.537498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:58.827 [2024-12-15 10:45:47.595445] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:58.827 [2024-12-15 10:45:47.611795] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 ***
00:07:58.827 INFO: Running with entropic power schedule (0xFF, 100).
00:07:58.827 INFO: Seed: 2348362696
00:07:58.827 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5),
00:07:58.827 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668),
00:07:58.827 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24
00:07:58.827 INFO: A corpus is not provided, starting from an empty corpus
00:07:58.828 #2 INITED exec/s: 0 rss: 60Mb
00:07:58.828 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
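For context on the libFuzzer status lines in this log: "#N NEW" marks the execution count at which a new input was kept, "cov:" and "ft:" report covered code edges and coverage features, "corp:" gives corpus units and total bytes, "exec/s:" the execution rate, "rss:" resident memory, "L:" the new unit's size against the largest unit (bounded by "lim:"), and "MS:" the mutation sequence that produced it. The "###### Recommended dictionary. ######" block above lists byte sequences (here "\013\000") that mutations used repeatedly; with a stock libFuzzer binary such tokens can be supplied back via -dict=<file>. The entry point driven in this job is TestOneInput in test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780; the sketch below is a minimal, generic libFuzzer harness for illustration only, not SPDK's actual harness, and process_command is a hypothetical stand-in for decoding and submitting one fuzzed NVMe command.

/* Minimal libFuzzer harness sketch (illustration only; NOT SPDK's
 * llvm_nvme_fuzz.c). Build with a clang that ships libFuzzer:
 *   clang -g -O1 -fsanitize=fuzzer,address harness.c -o harness
 * A run can reuse the PRNG seed printed in a log ("INFO: Seed: ..."):
 *   ./harness -seed=2348362696 corpus_dir/
 */
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in: a real harness would map `data` onto an NVMe
 * command (e.g. opcode 0x0e RESERVATION REPORT or COMPARE, as printed
 * in the log) and submit it to the target over the configured transport. */
static void process_command(const uint8_t *data, size_t size)
{
    if (size == 0) {
        return;
    }
    (void)data; /* command decoding elided in this sketch */
}

/* libFuzzer invokes this once per generated input; returning 0 means the
 * input was consumed without a crash. The coverage instrumentation added
 * by -fsanitize=fuzzer decides whether an input was "interesting", which
 * is what produces the "#N NEW cov: ..." lines in the log. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    process_command(data, size);
    return 0;
}

The SPDK wrapper invoked at nvmf/run.sh@36 above layers its own options on this model: per the visible command line, -F carries the NVMe-oF transport ID to attack, -D points at the per-fuzzer corpus directory, -Z selects the fuzzer number (24 here), and -t appears to carry the script's timen value bounding the run.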
00:07:58.828 This may also happen if the target rejected all inputs we tried so far
00:07:58.828 [2024-12-15 10:45:47.656441] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:58.828 [2024-12-15 10:45:47.656475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:58.828 [2024-12-15 10:45:47.656510] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:58.828 [2024-12-15 10:45:47.656527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:58.828 [2024-12-15 10:45:47.656556] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:58.828 [2024-12-15 10:45:47.656572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:59.087 NEW_FUNC[1/672]: 0x466208 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685
00:07:59.087 NEW_FUNC[2/672]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:07:59.087 #11 NEW cov: 11670 ft: 11671 corp: 2/72b lim: 100 exec/s: 0 rss: 68Mb L: 71/71 MS: 4 CopyPart-ShuffleBytes-CopyPart-InsertRepeatedBytes-
00:07:59.087 [2024-12-15 10:45:47.977204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65281 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:59.087 [2024-12-15 10:45:47.977240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:59.087 [2024-12-15 10:45:47.977275] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:59.087 [2024-12-15 10:45:47.977292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:59.087 [2024-12-15 10:45:47.977319] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:59.087 [2024-12-15 10:45:47.977335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:59.087 #17 NEW cov: 11783 ft: 12084 corp: 3/143b lim: 100 exec/s: 0 rss: 69Mb L: 71/71 MS: 1 ChangeBinInt-
00:07:59.087 [2024-12-15 10:45:48.047232] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:59.087 [2024-12-15 10:45:48.047264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:59.087 [2024-12-15 10:45:48.047299] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:59.087 [2024-12-15 10:45:48.047316]
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.087 #33 NEW cov: 11789 ft: 12846 corp: 4/201b lim: 100 exec/s: 0 rss: 69Mb L: 58/71 MS: 1 EraseBytes- 00:07:59.351 [2024-12-15 10:45:48.107383] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.351 [2024-12-15 10:45:48.107423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.351 [2024-12-15 10:45:48.107458] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.351 [2024-12-15 10:45:48.107475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.351 #34 NEW cov: 11874 ft: 13028 corp: 5/245b lim: 100 exec/s: 0 rss: 69Mb L: 44/71 MS: 1 EraseBytes- 00:07:59.351 [2024-12-15 10:45:48.157562] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.351 [2024-12-15 10:45:48.157591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.351 [2024-12-15 10:45:48.157624] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.351 [2024-12-15 10:45:48.157641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.351 [2024-12-15 10:45:48.157669] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.352 [2024-12-15 10:45:48.157685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.352 #35 NEW cov: 11874 ft: 13142 corp: 6/316b lim: 100 exec/s: 0 rss: 69Mb L: 71/71 MS: 1 CrossOver- 00:07:59.352 [2024-12-15 10:45:48.207666] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.352 [2024-12-15 10:45:48.207696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.352 [2024-12-15 10:45:48.207731] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.352 [2024-12-15 10:45:48.207750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.352 #36 NEW cov: 11874 ft: 13208 corp: 7/360b lim: 100 exec/s: 0 rss: 69Mb L: 44/71 MS: 1 ChangeBit- 00:07:59.352 [2024-12-15 10:45:48.277852] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.352 [2024-12-15 10:45:48.277883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.352 [2024-12-15 10:45:48.277918] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.352 [2024-12-15 10:45:48.277936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.352 #37 NEW cov: 11874 ft: 13252 corp: 8/404b lim: 100 exec/s: 0 rss: 69Mb L: 44/71 MS: 1 ChangeByte- 00:07:59.352 [2024-12-15 10:45:48.348079] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.352 [2024-12-15 10:45:48.348110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.352 [2024-12-15 10:45:48.348144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.352 [2024-12-15 10:45:48.348162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.352 [2024-12-15 10:45:48.348191] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744069414584575 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.352 [2024-12-15 10:45:48.348209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.612 #38 NEW cov: 11874 ft: 13298 corp: 9/471b lim: 100 exec/s: 0 rss: 69Mb L: 67/71 MS: 1 InsertRepeatedBytes- 00:07:59.612 [2024-12-15 10:45:48.398086] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.612 [2024-12-15 10:45:48.398121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.612 [2024-12-15 10:45:48.398154] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.612 [2024-12-15 10:45:48.398171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.612 #39 NEW cov: 11874 ft: 13394 corp: 10/515b lim: 100 exec/s: 0 rss: 69Mb L: 44/71 MS: 1 ShuffleBytes- 00:07:59.612 [2024-12-15 10:45:48.448356] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.612 [2024-12-15 10:45:48.448387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.612 [2024-12-15 10:45:48.448428] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:61938 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.612 [2024-12-15 10:45:48.448446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.612 [2024-12-15 10:45:48.448476] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17433981653976478193 len:61938 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:59.612 [2024-12-15 10:45:48.448492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.612 [2024-12-15 10:45:48.448521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.612 [2024-12-15 10:45:48.448547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.612 #40 NEW cov: 11874 ft: 13890 corp: 11/599b lim: 100 exec/s: 0 rss: 69Mb L: 84/84 MS: 1 InsertRepeatedBytes- 00:07:59.612 [2024-12-15 10:45:48.518467] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.612 [2024-12-15 10:45:48.518498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.612 [2024-12-15 10:45:48.518542] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446473593849118719 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.612 [2024-12-15 10:45:48.518559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.612 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:59.612 #42 NEW cov: 11891 ft: 13954 corp: 12/642b lim: 100 exec/s: 0 rss: 69Mb L: 43/84 MS: 2 ShuffleBytes-CrossOver- 00:07:59.612 [2024-12-15 10:45:48.578652] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446602232599150591 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.612 [2024-12-15 10:45:48.578682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.612 [2024-12-15 10:45:48.578713] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4278190080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.612 [2024-12-15 10:45:48.578730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.612 [2024-12-15 10:45:48.578758] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744069414584320 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.612 [2024-12-15 10:45:48.578774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.872 #43 NEW cov: 11891 ft: 13999 corp: 13/710b lim: 100 exec/s: 0 rss: 69Mb L: 68/84 MS: 1 InsertByte- 00:07:59.872 [2024-12-15 10:45:48.648817] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:10923366096936933271 len:38808 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.872 [2024-12-15 10:45:48.648846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.872 [2024-12-15 10:45:48.648879] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:10923366098549577623 len:38808 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.872 [2024-12-15 10:45:48.648897] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.872 #46 NEW cov: 11891 ft: 14013 corp: 14/767b lim: 100 exec/s: 46 rss: 69Mb L: 57/84 MS: 3 CMP-ChangeBit-InsertRepeatedBytes- DE: "7x*\206\346\215\004\000"- 00:07:59.872 [2024-12-15 10:45:48.698994] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.872 [2024-12-15 10:45:48.699024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.872 [2024-12-15 10:45:48.699055] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:61938 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.872 [2024-12-15 10:45:48.699072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.872 [2024-12-15 10:45:48.699100] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17433981653976478193 len:61938 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.872 [2024-12-15 10:45:48.699115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.872 [2024-12-15 10:45:48.699141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.872 [2024-12-15 10:45:48.699157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.872 #47 NEW cov: 11891 ft: 14102 corp: 15/852b lim: 100 exec/s: 47 rss: 69Mb L: 85/85 MS: 1 InsertByte- 00:07:59.872 [2024-12-15 10:45:48.769106] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:10923366096936933271 len:38808 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.872 [2024-12-15 10:45:48.769136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.872 [2024-12-15 10:45:48.769168] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:10923366098549577623 len:38808 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.872 [2024-12-15 10:45:48.769185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.872 #48 NEW cov: 11891 ft: 14166 corp: 16/909b lim: 100 exec/s: 48 rss: 70Mb L: 57/85 MS: 1 ShuffleBytes- 00:07:59.872 [2024-12-15 10:45:48.839288] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.872 [2024-12-15 10:45:48.839318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.872 [2024-12-15 10:45:48.839351] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.872 [2024-12-15 10:45:48.839367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.872 #49 NEW cov: 
11891 ft: 14170 corp: 17/953b lim: 100 exec/s: 49 rss: 70Mb L: 44/85 MS: 1 ShuffleBytes- 00:08:00.131 [2024-12-15 10:45:48.889438] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.131 [2024-12-15 10:45:48.889478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.131 [2024-12-15 10:45:48.889512] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.131 [2024-12-15 10:45:48.889530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.131 #50 NEW cov: 11891 ft: 14223 corp: 18/998b lim: 100 exec/s: 50 rss: 70Mb L: 45/85 MS: 1 InsertByte- 00:08:00.131 [2024-12-15 10:45:48.960358] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446602232599150591 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.131 [2024-12-15 10:45:48.960387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.131 [2024-12-15 10:45:48.960426] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4278190080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.131 [2024-12-15 10:45:48.960442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.131 [2024-12-15 10:45:48.960494] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:72057589742960640 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.131 [2024-12-15 10:45:48.960509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.131 #51 NEW cov: 11891 ft: 14322 corp: 19/1067b lim: 100 exec/s: 51 rss: 70Mb L: 69/85 MS: 1 InsertByte- 00:08:00.131 [2024-12-15 10:45:49.000366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.131 [2024-12-15 10:45:49.000394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.131 [2024-12-15 10:45:49.000448] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.131 [2024-12-15 10:45:49.000464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.131 #52 NEW cov: 11891 ft: 14400 corp: 20/1112b lim: 100 exec/s: 52 rss: 70Mb L: 45/85 MS: 1 ChangeBinInt- 00:08:00.131 [2024-12-15 10:45:49.040627] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65288 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.131 [2024-12-15 10:45:49.040655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.131 [2024-12-15 10:45:49.040705] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 
lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.131 [2024-12-15 10:45:49.040720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.131 [2024-12-15 10:45:49.040773] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.131 [2024-12-15 10:45:49.040788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.131 #53 NEW cov: 11891 ft: 14479 corp: 21/1183b lim: 100 exec/s: 53 rss: 70Mb L: 71/85 MS: 1 ChangeByte- 00:08:00.132 [2024-12-15 10:45:49.080730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.132 [2024-12-15 10:45:49.080760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.132 [2024-12-15 10:45:49.080812] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4503599627370496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.132 [2024-12-15 10:45:49.080827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.132 [2024-12-15 10:45:49.080881] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744069414584575 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.132 [2024-12-15 10:45:49.080896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.132 #54 NEW cov: 11891 ft: 14554 corp: 22/1250b lim: 100 exec/s: 54 rss: 70Mb L: 67/85 MS: 1 ChangeBit- 00:08:00.132 [2024-12-15 10:45:49.120829] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.132 [2024-12-15 10:45:49.120856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.132 [2024-12-15 10:45:49.120893] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.132 [2024-12-15 10:45:49.120908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.132 [2024-12-15 10:45:49.120961] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446743390809751551 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.132 [2024-12-15 10:45:49.120977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.391 #55 NEW cov: 11891 ft: 14585 corp: 23/1314b lim: 100 exec/s: 55 rss: 70Mb L: 64/85 MS: 1 InsertRepeatedBytes- 00:08:00.391 [2024-12-15 10:45:49.160830] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.391 [2024-12-15 10:45:49.160857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.391 [2024-12-15 10:45:49.160905] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.391 [2024-12-15 10:45:49.160920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.391 #56 NEW cov: 11891 ft: 14588 corp: 24/1366b lim: 100 exec/s: 56 rss: 70Mb L: 52/85 MS: 1 EraseBytes- 00:08:00.391 [2024-12-15 10:45:49.201071] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446602232599150591 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.391 [2024-12-15 10:45:49.201098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.391 [2024-12-15 10:45:49.201146] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4278190080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.391 [2024-12-15 10:45:49.201161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.391 [2024-12-15 10:45:49.201214] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744069414584320 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.391 [2024-12-15 10:45:49.201229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.391 #57 NEW cov: 11891 ft: 14609 corp: 25/1434b lim: 100 exec/s: 57 rss: 70Mb L: 68/85 MS: 1 ShuffleBytes- 00:08:00.391 [2024-12-15 10:45:49.241312] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.391 [2024-12-15 10:45:49.241339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.391 [2024-12-15 10:45:49.241377] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:61938 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.391 [2024-12-15 10:45:49.241392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.391 [2024-12-15 10:45:49.241459] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17433981653976478193 len:61938 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.391 [2024-12-15 10:45:49.241474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.391 [2024-12-15 10:45:49.241526] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.391 [2024-12-15 10:45:49.241541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.391 #58 NEW cov: 11891 ft: 14630 corp: 26/1519b lim: 100 exec/s: 58 rss: 70Mb L: 85/85 MS: 1 ShuffleBytes- 00:08:00.391 [2024-12-15 10:45:49.281480] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.391 [2024-12-15 10:45:49.281507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.391 [2024-12-15 10:45:49.281544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.391 [2024-12-15 10:45:49.281559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.391 [2024-12-15 10:45:49.281611] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.391 [2024-12-15 10:45:49.281626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.391 [2024-12-15 10:45:49.281678] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.391 [2024-12-15 10:45:49.281693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.391 #59 NEW cov: 11891 ft: 14635 corp: 27/1602b lim: 100 exec/s: 59 rss: 70Mb L: 83/85 MS: 1 InsertRepeatedBytes- 00:08:00.391 [2024-12-15 10:45:49.321411] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446602232582439679 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.391 [2024-12-15 10:45:49.321442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.391 [2024-12-15 10:45:49.321481] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4278190080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.391 [2024-12-15 10:45:49.321497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.392 [2024-12-15 10:45:49.321549] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:72057589742960640 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.392 [2024-12-15 10:45:49.321565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.392 #60 NEW cov: 11891 ft: 14655 corp: 28/1671b lim: 100 exec/s: 60 rss: 70Mb L: 69/85 MS: 1 CMP- DE: "\001\002"- 00:08:00.392 [2024-12-15 10:45:49.361425] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446601137382490111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.392 [2024-12-15 10:45:49.361452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.392 [2024-12-15 10:45:49.361505] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744069414584320 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.392 [2024-12-15 10:45:49.361521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.392 #61 NEW cov: 11891 ft: 14676 corp: 29/1719b lim: 100 exec/s: 61 rss: 70Mb L: 48/85 MS: 1 
EraseBytes- 00:08:00.392 [2024-12-15 10:45:49.401536] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.392 [2024-12-15 10:45:49.401564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.392 [2024-12-15 10:45:49.401629] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.392 [2024-12-15 10:45:49.401646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.651 #62 NEW cov: 11891 ft: 14702 corp: 30/1771b lim: 100 exec/s: 62 rss: 70Mb L: 52/85 MS: 1 ChangeByte- 00:08:00.651 [2024-12-15 10:45:49.441931] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.651 [2024-12-15 10:45:49.441960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.651 [2024-12-15 10:45:49.441996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:61938 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.651 [2024-12-15 10:45:49.442011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.651 [2024-12-15 10:45:49.442063] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17433981653976478193 len:61938 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.651 [2024-12-15 10:45:49.442078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.651 [2024-12-15 10:45:49.442132] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.651 [2024-12-15 10:45:49.442146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.651 #63 NEW cov: 11891 ft: 14722 corp: 31/1857b lim: 100 exec/s: 63 rss: 70Mb L: 86/86 MS: 1 InsertByte- 00:08:00.651 [2024-12-15 10:45:49.481733] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.651 [2024-12-15 10:45:49.481759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.651 [2024-12-15 10:45:49.481811] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.651 [2024-12-15 10:45:49.481827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.651 #64 NEW cov: 11891 ft: 14727 corp: 32/1915b lim: 100 exec/s: 64 rss: 70Mb L: 58/86 MS: 1 ChangeByte- 00:08:00.651 [2024-12-15 10:45:49.521705] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:00.651 [2024-12-15 10:45:49.521736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.651 #65 NEW cov: 11898 ft: 15571 corp: 33/1952b lim: 100 exec/s: 65 rss: 70Mb L: 37/86 MS: 1 EraseBytes- 00:08:00.651 [2024-12-15 10:45:49.561980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:10923366096936933271 len:38808 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.651 [2024-12-15 10:45:49.562007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.651 [2024-12-15 10:45:49.562055] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:10923366098549577607 len:38808 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.651 [2024-12-15 10:45:49.562072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.651 #66 NEW cov: 11898 ft: 15637 corp: 34/2009b lim: 100 exec/s: 66 rss: 70Mb L: 57/86 MS: 1 ChangeBit- 00:08:00.651 [2024-12-15 10:45:49.602094] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.651 [2024-12-15 10:45:49.602122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.651 [2024-12-15 10:45:49.602168] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.651 [2024-12-15 10:45:49.602183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.651 #67 NEW cov: 11898 ft: 15651 corp: 35/2053b lim: 100 exec/s: 67 rss: 70Mb L: 44/86 MS: 1 ChangeBinInt- 00:08:00.651 [2024-12-15 10:45:49.642211] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.651 [2024-12-15 10:45:49.642238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.651 [2024-12-15 10:45:49.642290] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.651 [2024-12-15 10:45:49.642306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.911 #68 NEW cov: 11898 ft: 15668 corp: 36/2111b lim: 100 exec/s: 34 rss: 70Mb L: 58/86 MS: 1 ChangeBit- 00:08:00.911 #68 DONE cov: 11898 ft: 15668 corp: 36/2111b lim: 100 exec/s: 34 rss: 70Mb 00:08:00.911 ###### Recommended dictionary. ###### 00:08:00.911 "7x*\206\346\215\004\000" # Uses: 0 00:08:00.911 "\001\002" # Uses: 0 00:08:00.911 ###### End of recommended dictionary. 
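The "Recommended dictionary" block above is standard libFuzzer end-of-run output: byte sequences the fuzzer found useful (here via the CMP-derived "DE:" entries) and suggests seeding into later runs. A minimal sketch of carrying those two tokens forward, assuming a plain libFuzzer-style invocation — the target binary, corpus path, dictionary filename, and token names below are placeholders, not taken from this log, and note that the log prints the bytes as C-style octal escapes while libFuzzer dictionary files expect \xNN hex:

    # Write the two recommended tokens to a dictionary file (sketch only).
    # "7x*\206\346\215\004\000" in octal is 7x*\x86\xe6\x8d\x04\x00 in hex.
    cat > nvmf_compare.dict <<'EOF'
    kw1="7x*\x86\xe6\x8d\x04\x00"
    kw2="\x01\x02"
    EOF
    # Hypothetical rerun shape: -dict= feeds the tokens back to the mutator;
    # -max_len=100 mirrors the "lim: 100" shown in the stats lines above.
    ./fuzz_target -dict=nvmf_compare.dict -max_len=100 corpus/
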
###### 00:08:00.911 Done 68 runs in 2 second(s) 00:08:00.911 10:45:49 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_24.conf 00:08:00.911 10:45:49 -- ../common.sh@72 -- # (( i++ )) 00:08:00.911 10:45:49 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:00.911 10:45:49 -- nvmf/run.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:08:00.911 00:08:00.911 real 1m5.284s 00:08:00.911 user 1m40.845s 00:08:00.911 sys 0m7.974s 00:08:00.911 10:45:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:00.911 10:45:49 -- common/autotest_common.sh@10 -- # set +x 00:08:00.911 ************************************ 00:08:00.911 END TEST nvmf_fuzz 00:08:00.911 ************************************ 00:08:00.911 10:45:49 -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:08:00.911 10:45:49 -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:08:00.911 10:45:49 -- fuzz/llvm.sh@20 -- # run_test vfio_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:00.911 10:45:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:00.911 10:45:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:00.911 10:45:49 -- common/autotest_common.sh@10 -- # set +x 00:08:00.911 ************************************ 00:08:00.911 START TEST vfio_fuzz 00:08:00.911 ************************************ 00:08:00.911 10:45:49 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:01.173 * Looking for test storage... 00:08:01.173 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:01.173 10:45:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:01.173 10:45:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:01.173 10:45:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:01.173 10:45:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:01.173 10:45:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:01.173 10:45:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:01.173 10:45:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:01.173 10:45:50 -- scripts/common.sh@335 -- # IFS=.-: 00:08:01.173 10:45:50 -- scripts/common.sh@335 -- # read -ra ver1 00:08:01.173 10:45:50 -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.173 10:45:50 -- scripts/common.sh@336 -- # read -ra ver2 00:08:01.173 10:45:50 -- scripts/common.sh@337 -- # local 'op=<' 00:08:01.173 10:45:50 -- scripts/common.sh@339 -- # ver1_l=2 00:08:01.173 10:45:50 -- scripts/common.sh@340 -- # ver2_l=1 00:08:01.173 10:45:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:01.173 10:45:50 -- scripts/common.sh@343 -- # case "$op" in 00:08:01.173 10:45:50 -- scripts/common.sh@344 -- # : 1 00:08:01.173 10:45:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:01.173 10:45:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.173 10:45:50 -- scripts/common.sh@364 -- # decimal 1 00:08:01.173 10:45:50 -- scripts/common.sh@352 -- # local d=1 00:08:01.173 10:45:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.173 10:45:50 -- scripts/common.sh@354 -- # echo 1 00:08:01.173 10:45:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:01.173 10:45:50 -- scripts/common.sh@365 -- # decimal 2 00:08:01.173 10:45:50 -- scripts/common.sh@352 -- # local d=2 00:08:01.173 10:45:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.173 10:45:50 -- scripts/common.sh@354 -- # echo 2 00:08:01.173 10:45:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:01.173 10:45:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:01.173 10:45:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:01.173 10:45:50 -- scripts/common.sh@367 -- # return 0 00:08:01.173 10:45:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.173 10:45:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:01.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.173 --rc genhtml_branch_coverage=1 00:08:01.173 --rc genhtml_function_coverage=1 00:08:01.173 --rc genhtml_legend=1 00:08:01.173 --rc geninfo_all_blocks=1 00:08:01.173 --rc geninfo_unexecuted_blocks=1 00:08:01.173 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:01.173 ' 00:08:01.173 10:45:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:01.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.173 --rc genhtml_branch_coverage=1 00:08:01.173 --rc genhtml_function_coverage=1 00:08:01.173 --rc genhtml_legend=1 00:08:01.173 --rc geninfo_all_blocks=1 00:08:01.173 --rc geninfo_unexecuted_blocks=1 00:08:01.173 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:01.173 ' 00:08:01.173 10:45:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:01.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.173 --rc genhtml_branch_coverage=1 00:08:01.173 --rc genhtml_function_coverage=1 00:08:01.173 --rc genhtml_legend=1 00:08:01.173 --rc geninfo_all_blocks=1 00:08:01.173 --rc geninfo_unexecuted_blocks=1 00:08:01.173 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:01.173 ' 00:08:01.173 10:45:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:01.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.173 --rc genhtml_branch_coverage=1 00:08:01.173 --rc genhtml_function_coverage=1 00:08:01.173 --rc genhtml_legend=1 00:08:01.173 --rc geninfo_all_blocks=1 00:08:01.173 --rc geninfo_unexecuted_blocks=1 00:08:01.173 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:01.173 ' 00:08:01.173 10:45:50 -- vfio/run.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:08:01.173 10:45:50 -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:08:01.173 10:45:50 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:01.173 10:45:50 -- common/autotest_common.sh@34 -- # set -e 00:08:01.173 10:45:50 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:01.173 10:45:50 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:01.173 10:45:50 -- common/autotest_common.sh@38 -- # [[ -e 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:01.173 10:45:50 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:08:01.173 10:45:50 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:01.173 10:45:50 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:01.173 10:45:50 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:01.173 10:45:50 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:01.173 10:45:50 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:01.173 10:45:50 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:01.173 10:45:50 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:01.173 10:45:50 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:01.173 10:45:50 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:01.173 10:45:50 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:01.173 10:45:50 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:01.173 10:45:50 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:01.173 10:45:50 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:01.173 10:45:50 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:01.173 10:45:50 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:01.173 10:45:50 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:01.173 10:45:50 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:01.173 10:45:50 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:01.173 10:45:50 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:01.173 10:45:50 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:01.173 10:45:50 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:01.173 10:45:50 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:01.173 10:45:50 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:01.173 10:45:50 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:01.173 10:45:50 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:01.173 10:45:50 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:01.173 10:45:50 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:01.173 10:45:50 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:01.173 10:45:50 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:01.173 10:45:50 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:01.173 10:45:50 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:01.173 10:45:50 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:01.173 10:45:50 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:01.173 10:45:50 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:08:01.173 10:45:50 -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:08:01.173 10:45:50 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:01.173 10:45:50 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:01.173 10:45:50 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:01.173 10:45:50 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:01.173 10:45:50 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:01.173 10:45:50 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:01.173 10:45:50 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 
00:08:01.173 10:45:50 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:01.173 10:45:50 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:01.173 10:45:50 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:01.173 10:45:50 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:01.173 10:45:50 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:01.173 10:45:50 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:01.173 10:45:50 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:01.173 10:45:50 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:01.173 10:45:50 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:08:01.173 10:45:50 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:01.173 10:45:50 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:01.173 10:45:50 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:01.173 10:45:50 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:01.173 10:45:50 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:01.173 10:45:50 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:01.173 10:45:50 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:01.173 10:45:50 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:01.173 10:45:50 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:01.173 10:45:50 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:08:01.173 10:45:50 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:01.173 10:45:50 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:01.173 10:45:50 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:08:01.173 10:45:50 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:01.173 10:45:50 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:01.173 10:45:50 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:01.173 10:45:50 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:01.173 10:45:50 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:01.173 10:45:50 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:01.173 10:45:50 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:01.173 10:45:50 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:01.173 10:45:50 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:01.173 10:45:50 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:01.173 10:45:50 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:01.174 10:45:50 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:01.174 10:45:50 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:01.174 10:45:50 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:01.174 10:45:50 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:01.174 10:45:50 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:01.174 10:45:50 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:01.174 10:45:50 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:01.174 10:45:50 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:01.174 10:45:50 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:01.174 10:45:50 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 
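The build_config.sh values traced above record how this SPDK tree was configured; the knobs relevant to this job are CONFIG_FUZZER=y, CONFIG_FUZZER_LIB pointing at clang 17's libclang_rt.fuzzer_no_main.a, CONFIG_VFIO_USER=y, CONFIG_UBSAN=y, and CONFIG_DEBUG=y. A sketch of a configure invocation that would produce a comparable build — the flag names come from SPDK's ./configure, but the exact set available varies by SPDK version, and the checkout path is a placeholder:

    # Reproduce the logged build knobs on a fresh SPDK checkout (sketch).
    cd /path/to/spdk    # placeholder; the CI job builds inside its workspace
    ./configure \
      --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a \
      --with-vfio-user \
      --enable-ubsan \
      --enable-debug
    make -j"$(nproc)"
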
00:08:01.174 10:45:50 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:01.174 10:45:50 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:01.174 10:45:50 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:01.174 10:45:50 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:01.174 10:45:50 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:01.174 10:45:50 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:01.174 10:45:50 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:01.174 10:45:50 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:01.174 10:45:50 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:08:01.174 10:45:50 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:01.174 #define SPDK_CONFIG_H 00:08:01.174 #define SPDK_CONFIG_APPS 1 00:08:01.174 #define SPDK_CONFIG_ARCH native 00:08:01.174 #undef SPDK_CONFIG_ASAN 00:08:01.174 #undef SPDK_CONFIG_AVAHI 00:08:01.174 #undef SPDK_CONFIG_CET 00:08:01.174 #define SPDK_CONFIG_COVERAGE 1 00:08:01.174 #define SPDK_CONFIG_CROSS_PREFIX 00:08:01.174 #undef SPDK_CONFIG_CRYPTO 00:08:01.174 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:01.174 #undef SPDK_CONFIG_CUSTOMOCF 00:08:01.174 #undef SPDK_CONFIG_DAOS 00:08:01.174 #define SPDK_CONFIG_DAOS_DIR 00:08:01.174 #define SPDK_CONFIG_DEBUG 1 00:08:01.174 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:01.174 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:01.174 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:01.174 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:01.174 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:01.174 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:01.174 #define SPDK_CONFIG_EXAMPLES 1 00:08:01.174 #undef SPDK_CONFIG_FC 00:08:01.174 #define SPDK_CONFIG_FC_PATH 00:08:01.174 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:01.174 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:01.174 #undef SPDK_CONFIG_FUSE 00:08:01.174 #define SPDK_CONFIG_FUZZER 1 00:08:01.174 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:08:01.174 #undef SPDK_CONFIG_GOLANG 00:08:01.174 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:01.174 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:01.174 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:01.174 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:01.174 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:01.174 #define SPDK_CONFIG_IDXD 1 00:08:01.174 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:01.174 #undef SPDK_CONFIG_IPSEC_MB 00:08:01.174 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:01.174 #define SPDK_CONFIG_ISAL 1 00:08:01.174 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:01.174 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:01.174 #define SPDK_CONFIG_LIBDIR 00:08:01.174 #undef SPDK_CONFIG_LTO 00:08:01.174 #define SPDK_CONFIG_MAX_LCORES 00:08:01.174 #define SPDK_CONFIG_NVME_CUSE 1 00:08:01.174 #undef SPDK_CONFIG_OCF 00:08:01.174 #define SPDK_CONFIG_OCF_PATH 00:08:01.174 #define SPDK_CONFIG_OPENSSL_PATH 00:08:01.174 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:01.174 #undef SPDK_CONFIG_PGO_USE 00:08:01.174 #define SPDK_CONFIG_PREFIX /usr/local 00:08:01.174 #undef SPDK_CONFIG_RAID5F 00:08:01.174 #undef SPDK_CONFIG_RBD 00:08:01.174 #define 
SPDK_CONFIG_RDMA 1 00:08:01.174 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:01.174 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:01.174 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:01.174 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:01.174 #undef SPDK_CONFIG_SHARED 00:08:01.174 #undef SPDK_CONFIG_SMA 00:08:01.174 #define SPDK_CONFIG_TESTS 1 00:08:01.174 #undef SPDK_CONFIG_TSAN 00:08:01.174 #define SPDK_CONFIG_UBLK 1 00:08:01.174 #define SPDK_CONFIG_UBSAN 1 00:08:01.174 #undef SPDK_CONFIG_UNIT_TESTS 00:08:01.174 #undef SPDK_CONFIG_URING 00:08:01.174 #define SPDK_CONFIG_URING_PATH 00:08:01.174 #undef SPDK_CONFIG_URING_ZNS 00:08:01.174 #undef SPDK_CONFIG_USDT 00:08:01.174 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:01.174 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:01.174 #define SPDK_CONFIG_VFIO_USER 1 00:08:01.174 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:01.174 #define SPDK_CONFIG_VHOST 1 00:08:01.174 #define SPDK_CONFIG_VIRTIO 1 00:08:01.174 #undef SPDK_CONFIG_VTUNE 00:08:01.174 #define SPDK_CONFIG_VTUNE_DIR 00:08:01.174 #define SPDK_CONFIG_WERROR 1 00:08:01.174 #define SPDK_CONFIG_WPDK_DIR 00:08:01.174 #undef SPDK_CONFIG_XNVME 00:08:01.174 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:01.174 10:45:50 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:01.174 10:45:50 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:01.174 10:45:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.174 10:45:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.174 10:45:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.174 10:45:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.174 10:45:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.174 10:45:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.174 10:45:50 -- paths/export.sh@5 -- # export PATH 00:08:01.174 10:45:50 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.174 10:45:50 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:01.174 10:45:50 -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:01.174 10:45:50 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:01.174 10:45:50 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:01.174 10:45:50 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:01.174 10:45:50 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:01.174 10:45:50 -- pm/common@16 -- # TEST_TAG=N/A 00:08:01.174 10:45:50 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:01.174 10:45:50 -- common/autotest_common.sh@52 -- # : 1 00:08:01.174 10:45:50 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:01.174 10:45:50 -- common/autotest_common.sh@56 -- # : 0 00:08:01.174 10:45:50 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:01.174 10:45:50 -- common/autotest_common.sh@58 -- # : 0 00:08:01.174 10:45:50 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:01.174 10:45:50 -- common/autotest_common.sh@60 -- # : 1 00:08:01.174 10:45:50 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:01.174 10:45:50 -- common/autotest_common.sh@62 -- # : 0 00:08:01.174 10:45:50 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:01.174 10:45:50 -- common/autotest_common.sh@64 -- # : 00:08:01.174 10:45:50 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:01.174 10:45:50 -- common/autotest_common.sh@66 -- # : 0 00:08:01.174 10:45:50 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:01.174 10:45:50 -- common/autotest_common.sh@68 -- # : 0 00:08:01.174 10:45:50 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:01.174 10:45:50 -- common/autotest_common.sh@70 -- # : 0 00:08:01.174 10:45:50 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:01.174 10:45:50 -- common/autotest_common.sh@72 -- # : 0 00:08:01.174 10:45:50 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:01.174 10:45:50 -- common/autotest_common.sh@74 -- # : 0 00:08:01.174 10:45:50 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:01.174 10:45:50 -- common/autotest_common.sh@76 -- # : 0 00:08:01.174 10:45:50 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:01.174 10:45:50 -- common/autotest_common.sh@78 -- # : 0 00:08:01.174 10:45:50 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:01.174 10:45:50 -- common/autotest_common.sh@80 -- # : 0 00:08:01.174 10:45:50 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:01.174 10:45:50 -- common/autotest_common.sh@82 -- # : 0 00:08:01.174 10:45:50 
-- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:01.174 10:45:50 -- common/autotest_common.sh@84 -- # : 0 00:08:01.174 10:45:50 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:01.174 10:45:50 -- common/autotest_common.sh@86 -- # : 0 00:08:01.174 10:45:50 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:01.174 10:45:50 -- common/autotest_common.sh@88 -- # : 0 00:08:01.174 10:45:50 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:01.174 10:45:50 -- common/autotest_common.sh@90 -- # : 0 00:08:01.174 10:45:50 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:01.175 10:45:50 -- common/autotest_common.sh@92 -- # : 1 00:08:01.175 10:45:50 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:01.175 10:45:50 -- common/autotest_common.sh@94 -- # : 1 00:08:01.175 10:45:50 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:01.175 10:45:50 -- common/autotest_common.sh@96 -- # : rdma 00:08:01.175 10:45:50 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:01.175 10:45:50 -- common/autotest_common.sh@98 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:01.175 10:45:50 -- common/autotest_common.sh@100 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:01.175 10:45:50 -- common/autotest_common.sh@102 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:01.175 10:45:50 -- common/autotest_common.sh@104 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:01.175 10:45:50 -- common/autotest_common.sh@106 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:01.175 10:45:50 -- common/autotest_common.sh@108 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:01.175 10:45:50 -- common/autotest_common.sh@110 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:01.175 10:45:50 -- common/autotest_common.sh@112 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:01.175 10:45:50 -- common/autotest_common.sh@114 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:01.175 10:45:50 -- common/autotest_common.sh@116 -- # : 1 00:08:01.175 10:45:50 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:01.175 10:45:50 -- common/autotest_common.sh@118 -- # : 00:08:01.175 10:45:50 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:01.175 10:45:50 -- common/autotest_common.sh@120 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:01.175 10:45:50 -- common/autotest_common.sh@122 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:01.175 10:45:50 -- common/autotest_common.sh@124 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:01.175 10:45:50 -- common/autotest_common.sh@126 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:01.175 10:45:50 -- common/autotest_common.sh@128 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:01.175 10:45:50 -- common/autotest_common.sh@130 -- # : 0 00:08:01.175 
10:45:50 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:01.175 10:45:50 -- common/autotest_common.sh@132 -- # : 00:08:01.175 10:45:50 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:01.175 10:45:50 -- common/autotest_common.sh@134 -- # : true 00:08:01.175 10:45:50 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:01.175 10:45:50 -- common/autotest_common.sh@136 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:01.175 10:45:50 -- common/autotest_common.sh@138 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:01.175 10:45:50 -- common/autotest_common.sh@140 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:01.175 10:45:50 -- common/autotest_common.sh@142 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:01.175 10:45:50 -- common/autotest_common.sh@144 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:01.175 10:45:50 -- common/autotest_common.sh@146 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:01.175 10:45:50 -- common/autotest_common.sh@148 -- # : 00:08:01.175 10:45:50 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:01.175 10:45:50 -- common/autotest_common.sh@150 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:01.175 10:45:50 -- common/autotest_common.sh@152 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:01.175 10:45:50 -- common/autotest_common.sh@154 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:01.175 10:45:50 -- common/autotest_common.sh@156 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:01.175 10:45:50 -- common/autotest_common.sh@158 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:01.175 10:45:50 -- common/autotest_common.sh@160 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:01.175 10:45:50 -- common/autotest_common.sh@163 -- # : 00:08:01.175 10:45:50 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:01.175 10:45:50 -- common/autotest_common.sh@165 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:01.175 10:45:50 -- common/autotest_common.sh@167 -- # : 0 00:08:01.175 10:45:50 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:01.175 10:45:50 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:01.175 10:45:50 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:01.175 10:45:50 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:01.175 10:45:50 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:01.175 10:45:50 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:01.175 10:45:50 -- common/autotest_common.sh@173 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:01.175 10:45:50 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:01.175 10:45:50 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:01.175 10:45:50 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:01.175 10:45:50 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:01.175 10:45:50 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:01.175 10:45:50 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:01.175 10:45:50 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:01.175 10:45:50 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:01.175 10:45:50 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:01.175 10:45:50 
-- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:01.175 10:45:50 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:01.175 10:45:50 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:01.175 10:45:50 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:01.175 10:45:50 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:01.175 10:45:50 -- common/autotest_common.sh@196 -- # cat 00:08:01.175 10:45:50 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:01.175 10:45:50 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:01.175 10:45:50 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:01.175 10:45:50 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:01.175 10:45:50 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:01.175 10:45:50 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:01.175 10:45:50 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:01.175 10:45:50 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:01.175 10:45:50 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:01.175 10:45:50 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:01.175 10:45:50 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:01.175 10:45:50 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:01.175 10:45:50 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:01.175 10:45:50 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:01.175 10:45:50 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:01.175 10:45:50 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:01.175 10:45:50 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:01.175 10:45:50 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:01.175 10:45:50 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:01.175 10:45:50 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:08:01.175 10:45:50 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:08:01.176 10:45:50 -- common/autotest_common.sh@249 -- # _LCOV= 00:08:01.176 10:45:50 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:08:01.176 10:45:50 -- common/autotest_common.sh@250 -- # [[ 1 -eq 1 ]] 00:08:01.176 10:45:50 -- common/autotest_common.sh@250 -- # _LCOV=1 00:08:01.176 10:45:50 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:01.176 10:45:50 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:08:01.176 10:45:50 -- common/autotest_common.sh@255 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:01.176 10:45:50 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:08:01.176 10:45:50 -- common/autotest_common.sh@259 -- # export valgrind= 00:08:01.176 10:45:50 -- common/autotest_common.sh@259 -- # valgrind= 00:08:01.176 10:45:50 -- common/autotest_common.sh@265 -- # uname -s 00:08:01.176 10:45:50 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:08:01.176 10:45:50 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:08:01.176 10:45:50 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:08:01.176 10:45:50 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:08:01.176 10:45:50 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:01.176 10:45:50 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:01.176 10:45:50 -- common/autotest_common.sh@275 -- # MAKE=make 00:08:01.176 10:45:50 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j112 00:08:01.176 10:45:50 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:08:01.176 10:45:50 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:08:01.176 10:45:50 -- common/autotest_common.sh@294 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:08:01.176 10:45:50 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:08:01.176 10:45:50 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:08:01.176 10:45:50 -- common/autotest_common.sh@319 -- # [[ -z 1318114 ]] 00:08:01.176 10:45:50 -- common/autotest_common.sh@319 -- # kill -0 1318114 00:08:01.176 10:45:50 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:08:01.176 10:45:50 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:08:01.176 10:45:50 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:08:01.176 10:45:50 -- common/autotest_common.sh@332 -- # local mount target_dir 00:08:01.176 10:45:50 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:08:01.176 10:45:50 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:08:01.176 10:45:50 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:08:01.176 10:45:50 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:08:01.176 10:45:50 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.1RxtlP 00:08:01.176 10:45:50 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:01.176 10:45:50 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:08:01.176 10:45:50 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:08:01.176 10:45:50 -- common/autotest_common.sh@356 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.1RxtlP/tests/vfio /tmp/spdk.1RxtlP 00:08:01.176 10:45:50 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:08:01.176 10:45:50 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:01.176 10:45:50 -- common/autotest_common.sh@328 -- # df -T 00:08:01.176 10:45:50 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:08:01.176 10:45:50 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_devtmpfs 00:08:01.176 10:45:50 -- 
common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:08:01.176 10:45:50 -- common/autotest_common.sh@363 -- # avails["$mount"]=67108864 00:08:01.176 10:45:50 -- common/autotest_common.sh@363 -- # sizes["$mount"]=67108864 00:08:01.176 10:45:50 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:08:01.176 10:45:50 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:01.176 10:45:50 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/pmem0 00:08:01.176 10:45:50 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext2 00:08:01.176 10:45:50 -- common/autotest_common.sh@363 -- # avails["$mount"]=785162240 00:08:01.176 10:45:50 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5284429824 00:08:01.176 10:45:50 -- common/autotest_common.sh@364 -- # uses["$mount"]=4499267584 00:08:01.176 10:45:50 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:01.176 10:45:50 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_root 00:08:01.176 10:45:50 -- common/autotest_common.sh@362 -- # fss["$mount"]=overlay 00:08:01.176 10:45:50 -- common/autotest_common.sh@363 -- # avails["$mount"]=54547345408 00:08:01.176 10:45:50 -- common/autotest_common.sh@363 -- # sizes["$mount"]=61730594816 00:08:01.176 10:45:50 -- common/autotest_common.sh@364 -- # uses["$mount"]=7183249408 00:08:01.176 10:45:50 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:01.176 10:45:50 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:01.176 10:45:50 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:01.176 10:45:50 -- common/autotest_common.sh@363 -- # avails["$mount"]=30864039936 00:08:01.176 10:45:50 -- common/autotest_common.sh@363 -- # sizes["$mount"]=30865297408 00:08:01.176 10:45:50 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:08:01.176 10:45:50 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:01.176 10:45:50 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:01.176 10:45:50 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:01.176 10:45:50 -- common/autotest_common.sh@363 -- # avails["$mount"]=12340121600 00:08:01.176 10:45:50 -- common/autotest_common.sh@363 -- # sizes["$mount"]=12346122240 00:08:01.176 10:45:50 -- common/autotest_common.sh@364 -- # uses["$mount"]=6000640 00:08:01.176 10:45:50 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:01.176 10:45:50 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:01.176 10:45:50 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:01.176 10:45:50 -- common/autotest_common.sh@363 -- # avails["$mount"]=30865096704 00:08:01.176 10:45:50 -- common/autotest_common.sh@363 -- # sizes["$mount"]=30865297408 00:08:01.176 10:45:50 -- common/autotest_common.sh@364 -- # uses["$mount"]=200704 00:08:01.176 10:45:50 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:01.176 10:45:50 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:01.176 10:45:50 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:01.176 10:45:50 -- common/autotest_common.sh@363 -- # avails["$mount"]=6173044736 00:08:01.176 10:45:50 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6173057024 00:08:01.176 10:45:50 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:08:01.176 10:45:50 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ 
mount 00:08:01.176 10:45:50 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:08:01.176 * Looking for test storage... 00:08:01.176 10:45:50 -- common/autotest_common.sh@369 -- # local target_space new_size 00:08:01.176 10:45:50 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:08:01.176 10:45:50 -- common/autotest_common.sh@373 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:01.176 10:45:50 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:01.176 10:45:50 -- common/autotest_common.sh@373 -- # mount=/ 00:08:01.176 10:45:50 -- common/autotest_common.sh@375 -- # target_space=54547345408 00:08:01.176 10:45:50 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:08:01.176 10:45:50 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:08:01.176 10:45:50 -- common/autotest_common.sh@381 -- # [[ overlay == tmpfs ]] 00:08:01.176 10:45:50 -- common/autotest_common.sh@381 -- # [[ overlay == ramfs ]] 00:08:01.176 10:45:50 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:08:01.176 10:45:50 -- common/autotest_common.sh@382 -- # new_size=9397841920 00:08:01.176 10:45:50 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:01.176 10:45:50 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:01.176 10:45:50 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:01.176 10:45:50 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:01.176 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:01.176 10:45:50 -- common/autotest_common.sh@390 -- # return 0 00:08:01.176 10:45:50 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:08:01.176 10:45:50 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:08:01.176 10:45:50 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:01.176 10:45:50 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:01.176 10:45:50 -- common/autotest_common.sh@1682 -- # true 00:08:01.176 10:45:50 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:08:01.176 10:45:50 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:01.176 10:45:50 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:01.176 10:45:50 -- common/autotest_common.sh@27 -- # exec 00:08:01.176 10:45:50 -- common/autotest_common.sh@29 -- # exec 00:08:01.176 10:45:50 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:01.176 10:45:50 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:01.176 10:45:50 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:01.176 10:45:50 -- common/autotest_common.sh@18 -- # set -x 00:08:01.176 10:45:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:01.176 10:45:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:01.176 10:45:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:01.436 10:45:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:01.436 10:45:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:01.436 10:45:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:01.436 10:45:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:01.436 10:45:50 -- scripts/common.sh@335 -- # IFS=.-: 00:08:01.436 10:45:50 -- scripts/common.sh@335 -- # read -ra ver1 00:08:01.436 10:45:50 -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.436 10:45:50 -- scripts/common.sh@336 -- # read -ra ver2 00:08:01.436 10:45:50 -- scripts/common.sh@337 -- # local 'op=<' 00:08:01.436 10:45:50 -- scripts/common.sh@339 -- # ver1_l=2 00:08:01.436 10:45:50 -- scripts/common.sh@340 -- # ver2_l=1 00:08:01.436 10:45:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:01.436 10:45:50 -- scripts/common.sh@343 -- # case "$op" in 00:08:01.436 10:45:50 -- scripts/common.sh@344 -- # : 1 00:08:01.436 10:45:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:01.436 10:45:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:01.436 10:45:50 -- scripts/common.sh@364 -- # decimal 1 00:08:01.436 10:45:50 -- scripts/common.sh@352 -- # local d=1 00:08:01.436 10:45:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.436 10:45:50 -- scripts/common.sh@354 -- # echo 1 00:08:01.436 10:45:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:01.436 10:45:50 -- scripts/common.sh@365 -- # decimal 2 00:08:01.436 10:45:50 -- scripts/common.sh@352 -- # local d=2 00:08:01.436 10:45:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.436 10:45:50 -- scripts/common.sh@354 -- # echo 2 00:08:01.436 10:45:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:01.436 10:45:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:01.436 10:45:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:01.436 10:45:50 -- scripts/common.sh@367 -- # return 0 00:08:01.436 10:45:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.436 10:45:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:01.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.436 --rc genhtml_branch_coverage=1 00:08:01.437 --rc genhtml_function_coverage=1 00:08:01.437 --rc genhtml_legend=1 00:08:01.437 --rc geninfo_all_blocks=1 00:08:01.437 --rc geninfo_unexecuted_blocks=1 00:08:01.437 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:01.437 ' 00:08:01.437 10:45:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:01.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.437 --rc genhtml_branch_coverage=1 00:08:01.437 --rc genhtml_function_coverage=1 00:08:01.437 --rc genhtml_legend=1 00:08:01.437 --rc geninfo_all_blocks=1 00:08:01.437 --rc geninfo_unexecuted_blocks=1 00:08:01.437 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:01.437 ' 00:08:01.437 10:45:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:01.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:01.437 --rc genhtml_branch_coverage=1 00:08:01.437 --rc genhtml_function_coverage=1 00:08:01.437 --rc genhtml_legend=1 00:08:01.437 --rc geninfo_all_blocks=1 00:08:01.437 --rc geninfo_unexecuted_blocks=1 00:08:01.437 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:01.437 ' 00:08:01.437 10:45:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:01.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.437 --rc genhtml_branch_coverage=1 00:08:01.437 --rc genhtml_function_coverage=1 00:08:01.437 --rc genhtml_legend=1 00:08:01.437 --rc geninfo_all_blocks=1 00:08:01.437 --rc geninfo_unexecuted_blocks=1 00:08:01.437 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:01.437 ' 00:08:01.437 10:45:50 -- vfio/run.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:08:01.437 10:45:50 -- ../common.sh@8 -- # pids=() 00:08:01.437 10:45:50 -- vfio/run.sh@58 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:01.437 10:45:50 -- vfio/run.sh@59 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:01.437 10:45:50 -- vfio/run.sh@59 -- # fuzz_num=7 00:08:01.437 10:45:50 -- vfio/run.sh@60 -- # (( fuzz_num != 0 )) 00:08:01.437 10:45:50 -- vfio/run.sh@62 -- # trap 'cleanup /tmp/vfio-user-*; exit 1' SIGINT SIGTERM EXIT 00:08:01.437 10:45:50 -- vfio/run.sh@65 -- # mem_size=0 00:08:01.437 10:45:50 -- vfio/run.sh@66 -- # [[ 1 -eq 1 ]] 00:08:01.437 10:45:50 -- vfio/run.sh@67 -- # start_llvm_fuzz_short 7 1 00:08:01.437 10:45:50 -- ../common.sh@69 -- # local fuzz_num=7 00:08:01.437 10:45:50 -- ../common.sh@70 -- # local time=1 00:08:01.437 10:45:50 -- ../common.sh@72 -- # (( i = 0 )) 00:08:01.437 10:45:50 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:01.437 10:45:50 -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:01.437 10:45:50 -- vfio/run.sh@22 -- # local fuzzer_type=0 00:08:01.437 10:45:50 -- vfio/run.sh@23 -- # local timen=1 00:08:01.437 10:45:50 -- vfio/run.sh@24 -- # local core=0x1 00:08:01.437 10:45:50 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:01.437 10:45:50 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:08:01.437 10:45:50 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:08:01.437 10:45:50 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:08:01.437 10:45:50 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:08:01.437 10:45:50 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:01.437 10:45:50 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:08:01.437 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:01.437 10:45:50 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 
-Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:08:01.437 [2024-12-15 10:45:50.311072] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:01.437 [2024-12-15 10:45:50.311161] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1318364 ] 00:08:01.437 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.437 [2024-12-15 10:45:50.385148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.696 [2024-12-15 10:45:50.455698] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:01.696 [2024-12-15 10:45:50.455842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.696 INFO: Running with entropic power schedule (0xFF, 100). 00:08:01.696 INFO: Seed: 1066392035 00:08:01.696 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f), 00:08:01.696 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0), 00:08:01.696 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:01.696 INFO: A corpus is not provided, starting from an empty corpus 00:08:01.696 #2 INITED exec/s: 0 rss: 62Mb 00:08:01.696 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:01.696 This may also happen if the target rejected all inputs we tried so far 00:08:02.214 NEW_FUNC[1/631]: 0x43a218 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:85 00:08:02.214 NEW_FUNC[2/631]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:02.214 #4 NEW cov: 10741 ft: 10526 corp: 2/23b lim: 60 exec/s: 0 rss: 68Mb L: 22/22 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:02.214 #5 NEW cov: 10772 ft: 13342 corp: 3/46b lim: 60 exec/s: 0 rss: 69Mb L: 23/23 MS: 1 InsertByte- 00:08:02.473 #8 NEW cov: 10772 ft: 14356 corp: 4/100b lim: 60 exec/s: 0 rss: 70Mb L: 54/54 MS: 3 CopyPart-CopyPart-InsertRepeatedBytes- 00:08:02.473 #9 NEW cov: 10772 ft: 14809 corp: 5/135b lim: 60 exec/s: 0 rss: 70Mb L: 35/54 MS: 1 InsertRepeatedBytes- 00:08:02.732 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:02.732 #10 NEW cov: 10789 ft: 15345 corp: 6/170b lim: 60 exec/s: 0 rss: 70Mb L: 35/54 MS: 1 ChangeBinInt- 00:08:02.732 #11 NEW cov: 10792 ft: 15781 corp: 7/205b lim: 60 exec/s: 11 rss: 70Mb L: 35/54 MS: 1 ShuffleBytes- 00:08:02.991 #17 NEW cov: 10792 ft: 15932 corp: 8/248b lim: 60 exec/s: 17 rss: 70Mb L: 43/54 MS: 1 InsertRepeatedBytes- 00:08:02.991 #18 NEW cov: 10792 ft: 15957 corp: 9/302b lim: 60 exec/s: 18 rss: 70Mb L: 54/54 MS: 1 CopyPart- 00:08:03.250 #19 NEW cov: 10792 ft: 16565 corp: 10/324b lim: 60 exec/s: 19 rss: 70Mb L: 22/54 MS: 1 ShuffleBytes- 00:08:03.509 #20 NEW cov: 10792 ft: 16874 corp: 11/346b lim: 60 exec/s: 20 rss: 70Mb L: 22/54 MS: 1 ChangeBit- 00:08:03.509 #21 NEW cov: 10799 ft: 17368 corp: 12/368b lim: 60 exec/s: 21 rss: 70Mb L: 22/54 MS: 1 CopyPart- 00:08:03.769 #22 NEW cov: 10799 ft: 17394 corp: 13/425b lim: 60 exec/s: 11 rss: 70Mb L: 57/57 MS: 1 CopyPart- 00:08:03.769 #22 DONE cov: 10799 ft: 17394 corp: 13/425b lim: 60 exec/s: 11 rss: 70Mb 00:08:03.769 Done 22 runs in 2 second(s) 
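
Note on reading the run above: it is plain libFuzzer output. On each "#N NEW" line, cov counts covered control-flow edges, ft counts distinct coverage features, corp gives the number of corpus entries and their total bytes, lim is the current input-length cap, L is the new input's size against the largest in the corpus, and MS lists the mutation sequence that produced it; the closing "#22 DONE" line carries the final totals for the run. One quick way to pull those closing totals for every fuzzer type out of a saved copy of this console output (the file name console.log is an assumption, and the pattern presumes one status line per line, as libFuzzer prints them):

  # summarize final coverage per run from a saved console log (file name assumed)
  grep -oE '#[0-9]+ +DONE +cov: +[0-9]+ +ft: +[0-9]+ +corp: +[0-9]+/[0-9]+b' console.log

Each match corresponds to one "Done N runs in 2 second(s)" block, so a full short-fuzz pass over all seven targets should yield seven matches.
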
00:08:04.028 10:45:52 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-0 00:08:04.028 10:45:52 -- ../common.sh@72 -- # (( i++ )) 00:08:04.028 10:45:52 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:04.028 10:45:52 -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:08:04.028 10:45:52 -- vfio/run.sh@22 -- # local fuzzer_type=1 00:08:04.028 10:45:52 -- vfio/run.sh@23 -- # local timen=1 00:08:04.028 10:45:52 -- vfio/run.sh@24 -- # local core=0x1 00:08:04.028 10:45:52 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:04.028 10:45:52 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:08:04.028 10:45:52 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:08:04.028 10:45:52 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:08:04.028 10:45:52 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:08:04.028 10:45:52 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:04.028 10:45:52 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:08:04.028 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:04.028 10:45:52 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:08:04.028 [2024-12-15 10:45:53.017932] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:04.028 [2024-12-15 10:45:53.018003] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1318712 ] 00:08:04.378 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.378 [2024-12-15 10:45:53.091375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.378 [2024-12-15 10:45:53.159231] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:04.378 [2024-12-15 10:45:53.159385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.378 INFO: Running with entropic power schedule (0xFF, 100). 00:08:04.378 INFO: Seed: 3772411808 00:08:04.637 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f), 00:08:04.637 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0), 00:08:04.637 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:04.637 INFO: A corpus is not provided, starting from an empty corpus 00:08:04.637 #2 INITED exec/s: 0 rss: 62Mb 00:08:04.637 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:04.637 This may also happen if the target rejected all inputs we tried so far 00:08:04.637 [2024-12-15 10:45:53.426461] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:04.637 [2024-12-15 10:45:53.426497] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:04.637 [2024-12-15 10:45:53.426515] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:04.896 NEW_FUNC[1/636]: 0x43a7b8 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:72 00:08:04.896 NEW_FUNC[2/636]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:04.896 #3 NEW cov: 10771 ft: 10737 corp: 2/13b lim: 40 exec/s: 0 rss: 67Mb L: 12/12 MS: 1 InsertRepeatedBytes- 00:08:04.896 [2024-12-15 10:45:53.839109] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:04.896 [2024-12-15 10:45:53.839149] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:04.896 [2024-12-15 10:45:53.839168] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:05.155 NEW_FUNC[1/2]: 0x440a48 in io_poller /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:394 00:08:05.155 NEW_FUNC[2/2]: 0x13556f8 in nvmf_qpair_is_admin_queue /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/./nvmf_internal.h:470 00:08:05.155 #4 NEW cov: 10796 ft: 13276 corp: 3/49b lim: 40 exec/s: 0 rss: 68Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:08:05.155 [2024-12-15 10:45:53.963809] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:05.155 [2024-12-15 10:45:53.963834] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:05.155 [2024-12-15 10:45:53.963852] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:05.155 #5 NEW cov: 10796 ft: 14375 corp: 4/85b lim: 40 exec/s: 0 rss: 69Mb L: 36/36 MS: 1 ChangeBinInt- 00:08:05.155 [2024-12-15 10:45:54.086553] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:05.155 [2024-12-15 10:45:54.086589] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:05.155 [2024-12-15 10:45:54.086607] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:05.155 #6 NEW cov: 10796 ft: 14572 corp: 5/121b lim: 40 exec/s: 0 rss: 70Mb L: 36/36 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\177"- 00:08:05.414 [2024-12-15 10:45:54.200399] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:05.414 [2024-12-15 10:45:54.200430] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:05.414 [2024-12-15 10:45:54.200450] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:05.414 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:05.414 #11 NEW cov: 10813 ft: 15151 corp: 6/126b lim: 40 exec/s: 0 rss: 70Mb L: 5/36 MS: 5 ChangeByte-InsertByte-ShuffleBytes-EraseBytes-CMP- DE: "K\000\000\000"- 00:08:05.414 [2024-12-15 10:45:54.324378] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:05.414 [2024-12-15 
10:45:54.324402] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:05.414 [2024-12-15 10:45:54.324425] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:05.414 #12 NEW cov: 10813 ft: 15761 corp: 7/146b lim: 40 exec/s: 12 rss: 70Mb L: 20/36 MS: 1 EraseBytes- 00:08:05.674 [2024-12-15 10:45:54.438184] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:05.674 [2024-12-15 10:45:54.438211] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:05.674 [2024-12-15 10:45:54.438229] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:05.674 #13 NEW cov: 10813 ft: 16298 corp: 8/151b lim: 40 exec/s: 13 rss: 70Mb L: 5/36 MS: 1 ChangeByte- 00:08:05.674 [2024-12-15 10:45:54.552198] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:05.674 [2024-12-15 10:45:54.552223] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:05.674 [2024-12-15 10:45:54.552242] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:05.674 #14 NEW cov: 10813 ft: 16487 corp: 9/156b lim: 40 exec/s: 14 rss: 70Mb L: 5/36 MS: 1 CMP- DE: "\000\000\000\200"- 00:08:05.674 [2024-12-15 10:45:54.666058] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:05.674 [2024-12-15 10:45:54.666083] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:05.674 [2024-12-15 10:45:54.666100] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:05.932 #15 NEW cov: 10813 ft: 16637 corp: 10/168b lim: 40 exec/s: 15 rss: 70Mb L: 12/36 MS: 1 ChangeBinInt- 00:08:05.932 [2024-12-15 10:45:54.790956] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:05.932 [2024-12-15 10:45:54.790982] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:05.932 [2024-12-15 10:45:54.791000] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:05.933 #16 NEW cov: 10813 ft: 16768 corp: 11/189b lim: 40 exec/s: 16 rss: 70Mb L: 21/36 MS: 1 CopyPart- 00:08:05.933 [2024-12-15 10:45:54.905732] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:05.933 [2024-12-15 10:45:54.905758] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:05.933 [2024-12-15 10:45:54.905776] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:06.192 #17 NEW cov: 10813 ft: 16833 corp: 12/201b lim: 40 exec/s: 17 rss: 70Mb L: 12/36 MS: 1 ChangeByte- 00:08:06.192 [2024-12-15 10:45:55.019612] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:06.192 [2024-12-15 10:45:55.019637] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:06.192 [2024-12-15 10:45:55.019655] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:06.192 #18 NEW cov: 10813 ft: 16965 corp: 13/220b lim: 40 exec/s: 18 rss: 70Mb L: 19/36 MS: 1 CrossOver- 00:08:06.192 [2024-12-15 10:45:55.132418] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:06.192 [2024-12-15 10:45:55.132442] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 
failed: Invalid argument 00:08:06.192 [2024-12-15 10:45:55.132460] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:06.192 #19 NEW cov: 10820 ft: 17004 corp: 14/225b lim: 40 exec/s: 19 rss: 70Mb L: 5/36 MS: 1 CrossOver- 00:08:06.450 [2024-12-15 10:45:55.245422] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:06.450 [2024-12-15 10:45:55.245446] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:06.450 [2024-12-15 10:45:55.245464] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:06.450 #20 NEW cov: 10820 ft: 17152 corp: 15/249b lim: 40 exec/s: 20 rss: 70Mb L: 24/36 MS: 1 CrossOver- 00:08:06.450 [2024-12-15 10:45:55.359434] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:06.450 [2024-12-15 10:45:55.359459] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:06.450 [2024-12-15 10:45:55.359477] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:06.450 #21 NEW cov: 10820 ft: 17551 corp: 16/254b lim: 40 exec/s: 10 rss: 70Mb L: 5/36 MS: 1 PersAutoDict- DE: "\000\000\000\200"- 00:08:06.450 #21 DONE cov: 10820 ft: 17551 corp: 16/254b lim: 40 exec/s: 10 rss: 70Mb 00:08:06.450 ###### Recommended dictionary. ###### 00:08:06.450 "\377\377\377\377\377\377\377\177" # Uses: 0 00:08:06.450 "K\000\000\000" # Uses: 0 00:08:06.450 "\000\000\000\200" # Uses: 1 00:08:06.450 ###### End of recommended dictionary. ###### 00:08:06.450 Done 21 runs in 2 second(s) 00:08:06.709 10:45:55 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-1 00:08:06.709 10:45:55 -- ../common.sh@72 -- # (( i++ )) 00:08:06.709 10:45:55 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:06.709 10:45:55 -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:08:06.709 10:45:55 -- vfio/run.sh@22 -- # local fuzzer_type=2 00:08:06.709 10:45:55 -- vfio/run.sh@23 -- # local timen=1 00:08:06.709 10:45:55 -- vfio/run.sh@24 -- # local core=0x1 00:08:06.709 10:45:55 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:06.709 10:45:55 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:08:06.709 10:45:55 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:08:06.709 10:45:55 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:08:06.709 10:45:55 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:08:06.709 10:45:55 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:06.709 10:45:55 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:08:06.709 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:06.709 10:45:55 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:08:06.969 [2024-12-15 10:45:55.740104] Starting SPDK v24.01.1-pre git sha1 
c13c99a5e / DPDK 23.11.0 initialization... 00:08:06.969 [2024-12-15 10:45:55.740173] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1319208 ] 00:08:06.969 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.969 [2024-12-15 10:45:55.813632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.969 [2024-12-15 10:45:55.879968] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:06.969 [2024-12-15 10:45:55.880109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.228 INFO: Running with entropic power schedule (0xFF, 100). 00:08:07.228 INFO: Seed: 2199448396 00:08:07.228 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f), 00:08:07.228 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0), 00:08:07.228 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:07.228 INFO: A corpus is not provided, starting from an empty corpus 00:08:07.228 #2 INITED exec/s: 0 rss: 62Mb 00:08:07.228 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:07.228 This may also happen if the target rejected all inputs we tried so far 00:08:07.228 [2024-12-15 10:45:56.198244] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:07.746 NEW_FUNC[1/635]: 0x43b1a8 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:104 00:08:07.746 NEW_FUNC[2/635]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:07.746 #11 NEW cov: 10755 ft: 10725 corp: 2/65b lim: 80 exec/s: 0 rss: 67Mb L: 64/64 MS: 4 ChangeBinInt-InsertByte-ChangeByte-InsertRepeatedBytes- 00:08:07.746 [2024-12-15 10:45:56.687517] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:08.005 NEW_FUNC[1/1]: 0x167a438 in nvme_qpair_is_admin_queue /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1090 00:08:08.005 #13 NEW cov: 10772 ft: 12935 corp: 3/117b lim: 80 exec/s: 0 rss: 69Mb L: 52/64 MS: 2 ShuffleBytes-CrossOver- 00:08:08.005 [2024-12-15 10:45:56.891178] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:08.005 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:08.005 #14 NEW cov: 10789 ft: 13799 corp: 4/133b lim: 80 exec/s: 0 rss: 70Mb L: 16/64 MS: 1 CrossOver- 00:08:08.263 [2024-12-15 10:45:57.082376] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:08.263 #20 NEW cov: 10789 ft: 15668 corp: 5/198b lim: 80 exec/s: 20 rss: 70Mb L: 65/65 MS: 1 InsertByte- 00:08:08.263 [2024-12-15 10:45:57.271990] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:08.522 #21 NEW cov: 10792 ft: 16152 corp: 6/250b lim: 80 exec/s: 21 rss: 70Mb L: 52/65 MS: 1 ChangeBit- 00:08:08.522 [2024-12-15 10:45:57.468117] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:08.780 #22 NEW cov: 10792 ft: 16386 corp: 7/315b lim: 80 exec/s: 22 rss: 70Mb L: 65/65 MS: 1 
CMP- DE: "\005\000\000\000"- 00:08:08.780 [2024-12-15 10:45:57.660595] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:08.780 #23 NEW cov: 10792 ft: 16738 corp: 8/371b lim: 80 exec/s: 23 rss: 70Mb L: 56/65 MS: 1 PersAutoDict- DE: "\005\000\000\000"- 00:08:09.038 [2024-12-15 10:45:57.859173] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:09.039 #24 NEW cov: 10799 ft: 17052 corp: 9/423b lim: 80 exec/s: 24 rss: 70Mb L: 52/65 MS: 1 CrossOver- 00:08:09.297 [2024-12-15 10:45:58.054613] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:09.297 #25 NEW cov: 10799 ft: 17119 corp: 10/502b lim: 80 exec/s: 12 rss: 70Mb L: 79/79 MS: 1 InsertRepeatedBytes- 00:08:09.297 #25 DONE cov: 10799 ft: 17119 corp: 10/502b lim: 80 exec/s: 12 rss: 70Mb 00:08:09.297 ###### Recommended dictionary. ###### 00:08:09.297 "\005\000\000\000" # Uses: 1 00:08:09.297 ###### End of recommended dictionary. ###### 00:08:09.297 Done 25 runs in 2 second(s) 00:08:09.556 10:45:58 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-2 00:08:09.556 10:45:58 -- ../common.sh@72 -- # (( i++ )) 00:08:09.556 10:45:58 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:09.556 10:45:58 -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:08:09.556 10:45:58 -- vfio/run.sh@22 -- # local fuzzer_type=3 00:08:09.556 10:45:58 -- vfio/run.sh@23 -- # local timen=1 00:08:09.556 10:45:58 -- vfio/run.sh@24 -- # local core=0x1 00:08:09.556 10:45:58 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:09.556 10:45:58 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:08:09.556 10:45:58 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:08:09.556 10:45:58 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:08:09.556 10:45:58 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:08:09.556 10:45:58 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:09.556 10:45:58 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:08:09.556 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:09.556 10:45:58 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:08:09.556 [2024-12-15 10:45:58.471549] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:09.556 [2024-12-15 10:45:58.471634] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1319746 ] 00:08:09.556 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.556 [2024-12-15 10:45:58.544574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.815 [2024-12-15 10:45:58.612411] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:09.816 [2024-12-15 10:45:58.612559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.816 INFO: Running with entropic power schedule (0xFF, 100). 00:08:09.816 INFO: Seed: 631464873 00:08:09.816 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f), 00:08:09.816 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0), 00:08:09.816 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:09.816 INFO: A corpus is not provided, starting from an empty corpus 00:08:09.816 #2 INITED exec/s: 0 rss: 62Mb 00:08:09.816 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:09.816 This may also happen if the target rejected all inputs we tried so far 00:08:10.334 NEW_FUNC[1/632]: 0x43b898 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:125 00:08:10.334 NEW_FUNC[2/632]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:10.334 #6 NEW cov: 10745 ft: 10632 corp: 2/96b lim: 320 exec/s: 0 rss: 68Mb L: 95/95 MS: 4 ChangeBinInt-ChangeBit-CrossOver-InsertRepeatedBytes- 00:08:10.593 #7 NEW cov: 10759 ft: 13068 corp: 3/191b lim: 320 exec/s: 0 rss: 69Mb L: 95/95 MS: 1 ChangeBit- 00:08:10.852 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:10.852 #8 NEW cov: 10776 ft: 14442 corp: 4/287b lim: 320 exec/s: 0 rss: 70Mb L: 96/96 MS: 1 InsertByte- 00:08:11.111 #9 NEW cov: 10779 ft: 14853 corp: 5/383b lim: 320 exec/s: 9 rss: 70Mb L: 96/96 MS: 1 InsertByte- 00:08:11.111 #10 NEW cov: 10779 ft: 15392 corp: 6/443b lim: 320 exec/s: 10 rss: 70Mb L: 60/96 MS: 1 EraseBytes- 00:08:11.370 #11 NEW cov: 10779 ft: 15773 corp: 7/538b lim: 320 exec/s: 11 rss: 70Mb L: 95/96 MS: 1 ShuffleBytes- 00:08:11.628 #12 NEW cov: 10779 ft: 16376 corp: 8/633b lim: 320 exec/s: 12 rss: 70Mb L: 95/96 MS: 1 ChangeBinInt- 00:08:11.628 #13 NEW cov: 10786 ft: 16436 corp: 9/728b lim: 320 exec/s: 13 rss: 70Mb L: 95/96 MS: 1 CMP- DE: "\377\377\377\004"- 00:08:11.888 #14 NEW cov: 10786 ft: 16453 corp: 10/828b lim: 320 exec/s: 7 rss: 70Mb L: 100/100 MS: 1 PersAutoDict- DE: "\377\377\377\004"- 00:08:11.888 #14 DONE cov: 10786 ft: 16453 corp: 10/828b lim: 320 exec/s: 7 rss: 70Mb 00:08:11.888 ###### Recommended dictionary. ###### 00:08:11.888 "\377\377\377\004" # Uses: 1 00:08:11.888 ###### End of recommended dictionary. 
###### 00:08:11.888 Done 14 runs in 2 second(s) 00:08:12.147 10:46:01 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-3 00:08:12.147 10:46:01 -- ../common.sh@72 -- # (( i++ )) 00:08:12.147 10:46:01 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:12.147 10:46:01 -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:12.147 10:46:01 -- vfio/run.sh@22 -- # local fuzzer_type=4 00:08:12.147 10:46:01 -- vfio/run.sh@23 -- # local timen=1 00:08:12.147 10:46:01 -- vfio/run.sh@24 -- # local core=0x1 00:08:12.147 10:46:01 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:12.147 10:46:01 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:08:12.147 10:46:01 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:08:12.147 10:46:01 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:08:12.147 10:46:01 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:08:12.147 10:46:01 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:12.147 10:46:01 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:08:12.147 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:12.147 10:46:01 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:08:12.147 [2024-12-15 10:46:01.134640] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:12.147 [2024-12-15 10:46:01.134712] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1320276 ] 00:08:12.407 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.407 [2024-12-15 10:46:01.206361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.407 [2024-12-15 10:46:01.273123] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:12.407 [2024-12-15 10:46:01.273267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.666 INFO: Running with entropic power schedule (0xFF, 100). 00:08:12.666 INFO: Seed: 3292465588 00:08:12.666 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f), 00:08:12.666 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0), 00:08:12.666 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:12.666 INFO: A corpus is not provided, starting from an empty corpus 00:08:12.666 #2 INITED exec/s: 0 rss: 62Mb 00:08:12.666 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
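
Between runs the trace repeats the same housekeeping, visible again just above for fuzzer type 4: remove the previous /tmp/vfio-user-N tree, bump the loop counter, create per-type socket and corpus directories, rewrite the template JSON config with sed so its vfio-user socket paths point at the new directories, and launch llvm_vfio_fuzz pinned to core 0x1, with -Z selecting the target and -t 1 capping each cycle at one second. A condensed sketch of that loop as it can be reconstructed from the traced commands (SPDK_DIR stands for the checkout root shown in the paths above; this is an illustration distilled from the log, including the implied redirect of the sed output, not a verbatim copy of vfio/run.sh):

  fuzzfile=$SPDK_DIR/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c
  fuzz_num=$(grep -c '\.fn =' "$fuzzfile")        # 7 registered targets in this build
  for (( i = 0; i < fuzz_num; i++ )); do
    d=/tmp/vfio-user-$i
    mkdir -p "$d/domain/1" "$d/domain/2" "$SPDK_DIR/../corpus/llvm_vfio_$i"
    # retarget the template config at this type's private socket directories
    sed -e "s%/tmp/vfio-user/domain/1%$d/domain/1%; s%/tmp/vfio-user/domain/2%$d/domain/2%" \
        "$SPDK_DIR/test/fuzz/llvm/vfio/fuzz_vfio_json.conf" > "$d/fuzz_vfio_json.conf"
    "$SPDK_DIR/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz" -m 0x1 -s 0 \
        -P "$SPDK_DIR/../output/llvm/" -F "$d/domain/1" -c "$d/fuzz_vfio_json.conf" \
        -t 1 -D "$SPDK_DIR/../corpus/llvm_vfio_$i" -Y "$d/domain/2" \
        -r "$d/spdk$i.sock" -Z "$i"
    rm -rf "$d"                                    # clean up before the next type
  done
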
00:08:12.666 This may also happen if the target rejected all inputs we tried so far 00:08:12.925 NEW_FUNC[1/632]: 0x43c118 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:145 00:08:12.925 NEW_FUNC[2/632]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:12.925 #6 NEW cov: 10750 ft: 10718 corp: 2/44b lim: 320 exec/s: 0 rss: 67Mb L: 43/43 MS: 4 ChangeByte-CrossOver-ChangeByte-InsertRepeatedBytes- 00:08:13.183 #7 NEW cov: 10764 ft: 13871 corp: 3/87b lim: 320 exec/s: 0 rss: 69Mb L: 43/43 MS: 1 ChangeBit- 00:08:13.442 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:13.442 #9 NEW cov: 10781 ft: 14591 corp: 4/184b lim: 320 exec/s: 0 rss: 70Mb L: 97/97 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:13.701 #10 NEW cov: 10781 ft: 14660 corp: 5/331b lim: 320 exec/s: 10 rss: 70Mb L: 147/147 MS: 1 InsertRepeatedBytes- 00:08:13.701 #11 NEW cov: 10781 ft: 15543 corp: 6/374b lim: 320 exec/s: 11 rss: 70Mb L: 43/147 MS: 1 CopyPart- 00:08:13.961 #12 NEW cov: 10781 ft: 15976 corp: 7/417b lim: 320 exec/s: 12 rss: 70Mb L: 43/147 MS: 1 ChangeBit- 00:08:14.220 #13 NEW cov: 10781 ft: 16180 corp: 8/666b lim: 320 exec/s: 13 rss: 70Mb L: 249/249 MS: 1 InsertRepeatedBytes- 00:08:14.480 #14 NEW cov: 10781 ft: 16227 corp: 9/701b lim: 320 exec/s: 14 rss: 70Mb L: 35/249 MS: 1 EraseBytes- 00:08:14.480 #15 NEW cov: 10788 ft: 16595 corp: 10/950b lim: 320 exec/s: 15 rss: 70Mb L: 249/249 MS: 1 CrossOver- 00:08:14.739 #16 pulse cov: 10788 ft: 16688 corp: 10/950b lim: 320 exec/s: 8 rss: 70Mb 00:08:14.739 #16 NEW cov: 10788 ft: 16688 corp: 11/1199b lim: 320 exec/s: 8 rss: 70Mb L: 249/249 MS: 1 ChangeBinInt- 00:08:14.739 #16 DONE cov: 10788 ft: 16688 corp: 11/1199b lim: 320 exec/s: 8 rss: 70Mb 00:08:14.739 Done 16 runs in 2 second(s) 00:08:14.998 10:46:03 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-4 00:08:14.998 10:46:03 -- ../common.sh@72 -- # (( i++ )) 00:08:14.998 10:46:03 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:14.998 10:46:03 -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:08:14.998 10:46:03 -- vfio/run.sh@22 -- # local fuzzer_type=5 00:08:14.998 10:46:03 -- vfio/run.sh@23 -- # local timen=1 00:08:14.998 10:46:03 -- vfio/run.sh@24 -- # local core=0x1 00:08:14.998 10:46:03 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:14.998 10:46:03 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:08:14.998 10:46:03 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:08:14.998 10:46:03 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:08:14.998 10:46:03 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:08:14.998 10:46:03 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:14.998 10:46:03 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:08:14.998 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:14.998 10:46:03 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5
00:08:14.998 [2024-12-15 10:46:03.949155] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:14.998 [2024-12-15 10:46:03.949226] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1320697 ]
00:08:14.998 EAL: No free 2048 kB hugepages reported on node 1
00:08:15.257 [2024-12-15 10:46:04.022264] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:15.257 [2024-12-15 10:46:04.090675] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:15.257 [2024-12-15 10:46:04.090820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:15.517 INFO: Running with entropic power schedule (0xFF, 100).
00:08:15.517 INFO: Seed: 1820504950
00:08:15.517 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f),
00:08:15.517 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0),
00:08:15.517 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5
00:08:15.517 INFO: A corpus is not provided, starting from an empty corpus
00:08:15.517 #2 INITED exec/s: 0 rss: 62Mb
00:08:15.517 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:15.517 This may also happen if the target rejected all inputs we tried so far
00:08:15.517 [2024-12-15 10:46:04.398450] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:15.517 [2024-12-15 10:46:04.398496] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:15.776 NEW_FUNC[1/638]: 0x43cb18 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:172
00:08:15.776 NEW_FUNC[2/638]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:08:15.776 #3 NEW cov: 10777 ft: 10746 corp: 2/92b lim: 120 exec/s: 0 rss: 68Mb L: 91/91 MS: 1 InsertRepeatedBytes-
00:08:16.035 [2024-12-15 10:46:04.861152] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:16.035 [2024-12-15 10:46:04.861196] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:16.035 #9 NEW cov: 10794 ft: 14210 corp: 3/184b lim: 120 exec/s: 0 rss: 69Mb L: 92/92 MS: 1 InsertByte-
00:08:16.293 [2024-12-15 10:46:05.064519] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:16.293 [2024-12-15 10:46:05.064549] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:16.293 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609
00:08:16.293 #10 NEW cov: 10811 ft: 14322 corp: 4/276b lim: 120 exec/s: 0 rss: 70Mb L: 92/92 MS: 1 ShuffleBytes-
00:08:16.293 [2024-12-15 10:46:05.265575] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:16.293 [2024-12-15 10:46:05.265607] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:16.551 #11 NEW cov: 10811 ft: 14707 corp: 5/368b lim: 120 exec/s: 11 rss: 70Mb L: 92/92 MS: 1 ChangeBinInt-
00:08:16.551 [2024-12-15 10:46:05.468493] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:16.551 [2024-12-15 10:46:05.468525] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:16.810 #12 NEW cov: 10811 ft: 15430 corp: 6/422b lim: 120 exec/s: 12 rss: 70Mb L: 54/92 MS: 1 EraseBytes-
00:08:16.810 [2024-12-15 10:46:05.671648] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:16.810 [2024-12-15 10:46:05.671679] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:16.810 #13 NEW cov: 10811 ft: 15510 corp: 7/514b lim: 120 exec/s: 13 rss: 70Mb L: 92/92 MS: 1 ShuffleBytes-
00:08:17.070 [2024-12-15 10:46:05.866010] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:17.070 [2024-12-15 10:46:05.866045] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:17.070 #19 NEW cov: 10811 ft: 15568 corp: 8/622b lim: 120 exec/s: 19 rss: 70Mb L: 108/108 MS: 1 CrossOver-
00:08:17.070 [2024-12-15 10:46:06.056451] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:17.070 [2024-12-15 10:46:06.056482] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:17.330 #20 NEW cov: 10818 ft: 15622 corp: 9/714b lim: 120 exec/s: 20 rss: 70Mb L: 92/108 MS: 1 InsertByte-
00:08:17.330 [2024-12-15 10:46:06.247333] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:17.330 [2024-12-15 10:46:06.247363] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:17.589 #21 NEW cov: 10818 ft: 16277 corp: 10/803b lim: 120 exec/s: 10 rss: 70Mb L: 89/108 MS: 1 InsertRepeatedBytes-
00:08:17.589 #21 DONE cov: 10818 ft: 16277 corp: 10/803b lim: 120 exec/s: 10 rss: 70Mb
00:08:17.589 Done 21 runs in 2 second(s)
00:08:17.848 10:46:06 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-5
00:08:17.848 10:46:06 -- ../common.sh@72 -- # (( i++ ))
00:08:17.848 10:46:06 -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:17.848 10:46:06 -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1
00:08:17.848 10:46:06 -- vfio/run.sh@22 -- # local fuzzer_type=6
00:08:17.848 10:46:06 -- vfio/run.sh@23 -- # local timen=1
00:08:17.848 10:46:06 -- vfio/run.sh@24 -- # local core=0x1
00:08:17.848 10:46:06 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:08:17.848 10:46:06 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6
00:08:17.848 10:46:06 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1
00:08:17.848 10:46:06 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2
00:08:17.848 10:46:06 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf
00:08:17.848 10:46:06 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:08:17.848 10:46:06 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%;
00:08:17.848 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:08:17.848 10:46:06 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6
00:08:17.848 [2024-12-15 10:46:06.681352] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:17.848 [2024-12-15 10:46:06.681430] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1321130 ]
00:08:17.848 EAL: No free 2048 kB hugepages reported on node 1
00:08:17.848 [2024-12-15 10:46:06.754616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:17.848 [2024-12-15 10:46:06.821943] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:17.848 [2024-12-15 10:46:06.822086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:18.107 INFO: Running with entropic power schedule (0xFF, 100).
00:08:18.107 INFO: Seed: 256549214
00:08:18.107 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f),
00:08:18.107 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0),
00:08:18.107 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:08:18.107 INFO: A corpus is not provided, starting from an empty corpus
00:08:18.107 #2 INITED exec/s: 0 rss: 62Mb
00:08:18.107 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:18.107 This may also happen if the target rejected all inputs we tried so far
00:08:18.366 [2024-12-15 10:46:07.129579] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:18.366 [2024-12-15 10:46:07.129625] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:18.625 NEW_FUNC[1/638]: 0x43d808 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190
00:08:18.625 NEW_FUNC[2/638]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:08:18.625 #11 NEW cov: 10776 ft: 10421 corp: 2/78b lim: 90 exec/s: 0 rss: 68Mb L: 77/77 MS: 4 ChangeByte-InsertByte-ChangeBit-InsertRepeatedBytes-
00:08:18.625 [2024-12-15 10:46:07.582509] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:18.625 [2024-12-15 10:46:07.582550] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:18.884 #17 NEW cov: 10790 ft: 13910 corp: 3/87b lim: 90 exec/s: 0 rss: 69Mb L: 9/77 MS: 1 CMP- DE: "\017\000\000\000\000\000\000\000"-
00:08:18.884 [2024-12-15 10:46:07.776966] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:18.884 [2024-12-15 10:46:07.776997] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:18.884 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609
00:08:18.884 #18 NEW cov: 10807 ft: 15024 corp: 4/164b lim: 90 exec/s: 0 rss: 70Mb L: 77/77 MS: 1 ChangeByte-
00:08:19.143 [2024-12-15 10:46:07.959194] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:19.143 [2024-12-15 10:46:07.959222] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:19.143 #19 NEW cov: 10807 ft: 15497 corp: 5/173b lim: 90 exec/s: 19 rss: 70Mb L: 9/77 MS: 1 ChangeByte-
00:08:19.143 [2024-12-15 10:46:08.143436] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:19.143 [2024-12-15 10:46:08.143465] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:19.402 #20 NEW cov: 10807 ft: 15631 corp: 6/256b lim: 90 exec/s: 20 rss: 70Mb L: 83/83 MS: 1 InsertRepeatedBytes-
00:08:19.402 [2024-12-15 10:46:08.325643] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:19.402 [2024-12-15 10:46:08.325674] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:19.660 #21 NEW cov: 10807 ft: 16372 corp: 7/340b lim: 90 exec/s: 21 rss: 70Mb L: 84/84 MS: 1 InsertByte-
00:08:19.660 [2024-12-15 10:46:08.512243] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:19.660 [2024-12-15 10:46:08.512276] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:19.919 #22 NEW cov: 10807 ft: 16799 corp: 8/418b lim: 90 exec/s: 22 rss: 70Mb L: 78/84 MS: 1 InsertByte-
00:08:19.919 [2024-12-15 10:46:08.699121] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:19.919 [2024-12-15 10:46:08.699151] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:19.919 #23 NEW cov: 10807 ft: 17012 corp: 9/501b lim: 90 exec/s: 23 rss: 70Mb L: 83/84 MS: 1 ChangeBinInt-
00:08:19.919 [2024-12-15 10:46:08.883198] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:19.919 [2024-12-15 10:46:08.883229] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:20.178 #24 NEW cov: 10814 ft: 17090 corp: 10/584b lim: 90 exec/s: 24 rss: 70Mb L: 83/84 MS: 1 CrossOver-
00:08:20.178 [2024-12-15 10:46:09.068543] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:20.178 [2024-12-15 10:46:09.068572] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:20.178 #25 NEW cov: 10814 ft: 17176 corp: 11/667b lim: 90 exec/s: 12 rss: 70Mb L: 83/84 MS: 1 PersAutoDict- DE: "\017\000\000\000\000\000\000\000"-
00:08:20.178 #25 DONE cov: 10814 ft: 17176 corp: 11/667b lim: 90 exec/s: 12 rss: 70Mb
00:08:20.178 ###### Recommended dictionary. ######
00:08:20.178 "\017\000\000\000\000\000\000\000" # Uses: 1
00:08:20.178 ###### End of recommended dictionary. ######
00:08:20.178 Done 25 runs in 2 second(s)
00:08:20.437 10:46:09 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-6
00:08:20.437 10:46:09 -- ../common.sh@72 -- # (( i++ ))
00:08:20.437 10:46:09 -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:20.437 10:46:09 -- vfio/run.sh@75 -- # trap - SIGINT SIGTERM EXIT
00:08:20.437
00:08:20.437 real 0m19.609s
00:08:20.437 user 0m27.236s
00:08:20.437 sys 0m1.859s
00:08:20.437 10:46:09 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:20.437 10:46:09 -- common/autotest_common.sh@10 -- # set +x
00:08:20.437 ************************************
00:08:20.437 END TEST vfio_fuzz
00:08:20.437 ************************************
00:08:20.696
00:08:20.696 real 1m25.186s
00:08:20.696 user 2m8.223s
00:08:20.696 sys 0m10.015s
00:08:20.696 10:46:09 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:20.696 10:46:09 -- common/autotest_common.sh@10 -- # set +x
00:08:20.696 ************************************
00:08:20.696 END TEST llvm_fuzz
00:08:20.696 ************************************
00:08:20.696 10:46:09 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]]
00:08:20.696 10:46:09 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT
00:08:20.696 10:46:09 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup
00:08:20.696 10:46:09 -- common/autotest_common.sh@722 -- # xtrace_disable
00:08:20.696 10:46:09 -- common/autotest_common.sh@10 -- # set +x
00:08:20.696 10:46:09 -- spdk/autotest.sh@373 -- # autotest_cleanup
00:08:20.696 10:46:09 -- common/autotest_common.sh@1381 -- # local autotest_es=0
00:08:20.696 10:46:09 -- common/autotest_common.sh@1382 -- # xtrace_disable
00:08:20.696 10:46:09 -- common/autotest_common.sh@10 -- # set +x
00:08:27.263 INFO: APP EXITING
00:08:27.263 INFO: killing all VMs
00:08:27.263 INFO: killing vhost app
00:08:27.263 INFO: EXIT DONE
00:08:30.554 Waiting for block devices as requested
00:08:30.554 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:08:30.554 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:08:30.554 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:08:30.554 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:08:30.554 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:08:30.554 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:08:30.814 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:08:30.814 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:08:30.814 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:08:30.814 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:08:31.073 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:08:31.073 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:08:31.073 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:08:31.332 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:08:31.332 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:08:31.332 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:08:31.592 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
00:08:34.881 Cleaning
00:08:34.881 Removing: /dev/shm/spdk_tgt_trace.pid1282908
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1280409
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1281693
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1282908
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1283707
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1284027
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1284364
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1284708
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1285047
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1285332
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1285616
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1285946
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1286812
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1290010
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1290323
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1290636
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1290887
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1291457
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1291522
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1292038
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1292254
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1292538
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1292624
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1292920
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1292980
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1293569
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1293854
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1294073
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1294218
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1294518
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1294554
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1294853
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1295023
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1295228
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1295435
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1295717
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1295988
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1296271
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1296537
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1296835
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1297102
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1297385
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1297559
00:08:34.881 Removing: /var/run/dpdk/spdk_pid1297761
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1297967
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1298256
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1298524
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1298805
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1299074
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1299362
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1299586
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1299784
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1299956
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1300222
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1300490
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1300782
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1301048
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1301335
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1301533
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1301737
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1301919
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1302204
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1302470
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1302754
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1303026
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1303321
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1303550
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1303769
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1303932
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1304185
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1304457
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1304746
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1304934
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1305157
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1305915
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1306371
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1306748
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1307285
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1307698
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1308119
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1308677
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1309216
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1309888
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1310595
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1311138
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1311529
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1311976
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1312519
00:08:35.140 Removing: /var/run/dpdk/spdk_pid1313041
00:08:35.141 Removing: /var/run/dpdk/spdk_pid1313386
00:08:35.141 Removing: /var/run/dpdk/spdk_pid1313892
00:08:35.141 Removing: /var/run/dpdk/spdk_pid1314439
00:08:35.141 Removing: /var/run/dpdk/spdk_pid1314935
00:08:35.141 Removing: /var/run/dpdk/spdk_pid1315277
00:08:35.141 Removing: /var/run/dpdk/spdk_pid1315820
00:08:35.141 Removing: /var/run/dpdk/spdk_pid1316359
00:08:35.141 Removing: /var/run/dpdk/spdk_pid1316676
00:08:35.141 Removing: /var/run/dpdk/spdk_pid1317195
00:08:35.141 Removing: /var/run/dpdk/spdk_pid1317729
00:08:35.141 Removing: /var/run/dpdk/spdk_pid1318364
00:08:35.141 Removing: /var/run/dpdk/spdk_pid1318712
00:08:35.141 Removing: /var/run/dpdk/spdk_pid1319208
00:08:35.141 Removing: /var/run/dpdk/spdk_pid1319746
00:08:35.141 Removing: /var/run/dpdk/spdk_pid1320276
00:08:35.141 Removing: /var/run/dpdk/spdk_pid1320697
00:08:35.141 Removing: /var/run/dpdk/spdk_pid1321130
00:08:35.141 Clean
00:08:35.399 killing process with pid 1233045
00:08:39.590 killing process with pid 1233041
00:08:39.591 killing process with pid 1233043
00:08:39.591 killing process with pid 1233042
00:08:39.591 10:46:28 -- common/autotest_common.sh@1446 -- # return 0
00:08:39.591 10:46:28 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup
00:08:39.591 10:46:28 -- common/autotest_common.sh@728 -- # xtrace_disable
00:08:39.591 10:46:28 -- common/autotest_common.sh@10 -- # set +x
00:08:39.591 10:46:28 -- spdk/autotest.sh@376 -- # timing_exit autotest
00:08:39.591 10:46:28 -- common/autotest_common.sh@728 -- # xtrace_disable
00:08:39.591 10:46:28 -- common/autotest_common.sh@10 -- # set +x
00:08:39.591 10:46:28 -- spdk/autotest.sh@377 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:08:39.591 10:46:28 -- spdk/autotest.sh@379 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]]
00:08:39.591 10:46:28 -- spdk/autotest.sh@379 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log
10:46:28 -- spdk/autotest.sh@381 -- # [[ y == y ]]
10:46:28 -- spdk/autotest.sh@383 -- # hostname
10:46:28 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-20 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info
00:08:39.591 geninfo: WARNING: invalid characters removed from testname!
00:08:40.159 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcda
00:08:40.159 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcda
00:08:40.159 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcda
00:08:52.515 10:46:39 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:08:57.787 10:46:46 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:01.979 10:46:50 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:07.256 10:46:55 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:11.449 10:47:00 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:16.723 10:47:04 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:20.915 10:47:09 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:09:20.915 10:47:09 -- common/autotest_common.sh@1689 -- $ [[ y == y ]]
00:09:20.915 10:47:09 -- common/autotest_common.sh@1690 -- $ lcov --version
00:09:20.915 10:47:09 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}'
00:09:20.915 10:47:09 -- common/autotest_common.sh@1690 -- $ lt 1.15 2
00:09:20.915 10:47:09 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2
00:09:20.915 10:47:09 -- scripts/common.sh@332 -- $ local ver1 ver1_l
00:09:20.915 10:47:09 -- scripts/common.sh@333 -- $ local ver2 ver2_l
00:09:20.915 10:47:09 -- scripts/common.sh@335 -- $ IFS=.-:
00:09:20.915 10:47:09 -- scripts/common.sh@335 -- $ read -ra ver1
00:09:20.915 10:47:09 -- scripts/common.sh@336 -- $ IFS=.-:
00:09:20.915 10:47:09 -- scripts/common.sh@336 -- $ read -ra ver2
00:09:20.915 10:47:09 -- scripts/common.sh@337 -- $ local 'op=<'
00:09:20.915 10:47:09 -- scripts/common.sh@339 -- $ ver1_l=2
00:09:20.915 10:47:09 -- scripts/common.sh@340 -- $ ver2_l=1
00:09:20.915 10:47:09 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:09:20.915 10:47:09 -- scripts/common.sh@343 -- $ case "$op" in
00:09:20.915 10:47:09 -- scripts/common.sh@344 -- $ : 1
00:09:20.915 10:47:09 -- scripts/common.sh@363 -- $ (( v = 0 ))
00:09:20.915 10:47:09 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:20.915 10:47:09 -- scripts/common.sh@364 -- $ decimal 1
00:09:20.915 10:47:09 -- scripts/common.sh@352 -- $ local d=1
00:09:20.915 10:47:09 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:09:20.915 10:47:09 -- scripts/common.sh@354 -- $ echo 1
00:09:20.915 10:47:09 -- scripts/common.sh@364 -- $ ver1[v]=1
00:09:20.915 10:47:09 -- scripts/common.sh@365 -- $ decimal 2
00:09:20.915 10:47:09 -- scripts/common.sh@352 -- $ local d=2
00:09:20.915 10:47:09 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:09:20.915 10:47:09 -- scripts/common.sh@354 -- $ echo 2
00:09:20.915 10:47:09 -- scripts/common.sh@365 -- $ ver2[v]=2
00:09:20.915 10:47:09 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:09:20.915 10:47:09 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] ))
00:09:20.915 10:47:09 -- scripts/common.sh@367 -- $ return 0
00:09:20.915 10:47:09 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:20.915 10:47:09 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS=
00:09:20.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:20.915 --rc genhtml_branch_coverage=1
00:09:20.915 --rc genhtml_function_coverage=1
00:09:20.915 --rc genhtml_legend=1
00:09:20.915 --rc geninfo_all_blocks=1
00:09:20.915 --rc geninfo_unexecuted_blocks=1
00:09:20.915 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:09:20.915 '
00:09:20.915 10:47:09 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS='
00:09:20.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:20.916 --rc genhtml_branch_coverage=1
00:09:20.916 --rc genhtml_function_coverage=1
00:09:20.916 --rc genhtml_legend=1
00:09:20.916 --rc geninfo_all_blocks=1
00:09:20.916 --rc geninfo_unexecuted_blocks=1
00:09:20.916 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:09:20.916 '
00:09:20.916 10:47:09 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov
00:09:20.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:20.916 --rc genhtml_branch_coverage=1
00:09:20.916 --rc genhtml_function_coverage=1
00:09:20.916 --rc genhtml_legend=1
00:09:20.916 --rc geninfo_all_blocks=1
00:09:20.916 --rc geninfo_unexecuted_blocks=1
00:09:20.916 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:09:20.916 '
00:09:20.916 10:47:09 -- common/autotest_common.sh@1704 -- $ LCOV='lcov
00:09:20.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:20.916 --rc genhtml_branch_coverage=1
00:09:20.916 --rc genhtml_function_coverage=1
00:09:20.916 --rc genhtml_legend=1
00:09:20.916 --rc geninfo_all_blocks=1
00:09:20.916 --rc geninfo_unexecuted_blocks=1
00:09:20.916 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:09:20.916 '
00:09:20.916 10:47:09 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:09:20.916 10:47:09 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:09:20.916 10:47:09 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:20.916 10:47:09 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:20.916 10:47:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:20.916 10:47:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:20.916 10:47:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:20.916 10:47:09 -- paths/export.sh@5 -- $ export PATH
00:09:20.916 10:47:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:20.916 10:47:09 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:09:20.916 10:47:09 -- common/autobuild_common.sh@440 -- $ date +%s
00:09:20.916 10:47:09 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734256029.XXXXXX
00:09:20.916 10:47:09 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734256029.EUceGM
00:09:20.916 10:47:09 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:09:20.916 10:47:09 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:09:20.916 10:47:09 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
00:09:20.916 10:47:09 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
00:09:20.916 10:47:09 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:09:20.916 10:47:09 -- common/autobuild_common.sh@456 -- $ get_config_params
00:09:20.916 10:47:09 -- common/autotest_common.sh@397 -- $ xtrace_disable
00:09:20.916 10:47:09 -- common/autotest_common.sh@10 -- $ set +x
00:09:20.916 10:47:09 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:09:20.916 10:47:09 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:09:20.916 10:47:09 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:20.916 10:47:09 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:09:20.916 10:47:09 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:09:20.916 10:47:09 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:09:20.916 10:47:09 -- spdk/autopackage.sh@19 -- $ timing_finish
00:09:20.916 10:47:09 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:09:20.916 10:47:09 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:09:20.916 10:47:09 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:09:20.916 10:47:09 -- spdk/autopackage.sh@20 -- $ exit 0
00:09:20.928 + [[ -n 1189106 ]]
00:09:20.928 + sudo kill 1189106
00:09:20.934 [Pipeline] }
00:09:20.951 [Pipeline] // stage
00:09:20.956 [Pipeline] }
00:09:20.970 [Pipeline] // timeout
00:09:20.975 [Pipeline] }
00:09:20.989 [Pipeline] // catchError
00:09:20.994 [Pipeline] }
00:09:21.009 [Pipeline] // wrap
00:09:21.015 [Pipeline] }
00:09:21.028 [Pipeline] // catchError
00:09:21.038 [Pipeline] stage
00:09:21.040 [Pipeline] { (Epilogue)
00:09:21.054 [Pipeline] catchError
00:09:21.056 [Pipeline] {
00:09:21.070 [Pipeline] echo
00:09:21.072 Cleanup processes
00:09:21.078 [Pipeline] sh
00:09:21.365 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:21.365 1330647 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:21.379 [Pipeline] sh
00:09:21.666 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:21.666 ++ grep -v 'sudo pgrep'
00:09:21.666 ++ awk '{print $1}'
00:09:21.666 + sudo kill -9
00:09:21.666 + true
00:09:21.679 [Pipeline] sh
00:09:21.963 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:09:21.963 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB
00:09:21.963 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB
00:09:23.335 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB
00:09:33.327 [Pipeline] sh
00:09:33.612 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:09:33.612 Artifacts sizes are good
00:09:33.627 [Pipeline] archiveArtifacts
00:09:33.634 Archiving artifacts
00:09:33.795 [Pipeline] sh
00:09:34.133 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest
00:09:34.148 [Pipeline] cleanWs
00:09:34.158 [WS-CLEANUP] Deleting project workspace...
00:09:34.158 [WS-CLEANUP] Deferred wipeout is used...
00:09:34.164 [WS-CLEANUP] done
00:09:34.165 [Pipeline] }
00:09:34.178 [Pipeline] // catchError
00:09:34.188 [Pipeline] sh
00:09:34.471 + logger -p user.info -t JENKINS-CI
00:09:34.480 [Pipeline] }
00:09:34.494 [Pipeline] // stage
00:09:34.499 [Pipeline] }
00:09:34.513 [Pipeline] // node
00:09:34.518 [Pipeline] End of Pipeline
00:09:34.580 Finished: SUCCESS