00:00:00.006 Started by upstream project "autotest-per-patch" build number 132804 00:00:00.006 originally caused by: 00:00:00.006 Started by user sys_sgci 00:00:00.081 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.084 The recommended git tool is: git 00:00:00.085 using credential 00000000-0000-0000-0000-000000000002 00:00:00.086 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.124 Fetching changes from the remote Git repository 00:00:00.126 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.156 Using shallow fetch with depth 1 00:00:00.156 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.156 > git --version # timeout=10 00:00:00.183 > git --version # 'git version 2.39.2' 00:00:00.183 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.205 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.205 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.780 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.792 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.804 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.804 > git config core.sparsecheckout # timeout=10 00:00:04.818 > git read-tree -mu HEAD # timeout=10 00:00:04.837 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.859 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.859 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.950 [Pipeline] Start of Pipeline 00:00:04.962 [Pipeline] library 00:00:04.963 Loading library shm_lib@master 00:00:04.963 Library shm_lib@master is cached. Copying from home. 00:00:04.978 [Pipeline] node 00:00:04.990 Running on WFP49 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:04.992 [Pipeline] { 00:00:05.003 [Pipeline] catchError 00:00:05.005 [Pipeline] { 00:00:05.017 [Pipeline] wrap 00:00:05.026 [Pipeline] { 00:00:05.033 [Pipeline] stage 00:00:05.035 [Pipeline] { (Prologue) 00:00:05.380 [Pipeline] sh 00:00:05.666 + logger -p user.info -t JENKINS-CI 00:00:05.697 [Pipeline] echo 00:00:05.698 Node: WFP49 00:00:05.703 [Pipeline] sh 00:00:06.016 [Pipeline] setCustomBuildProperty 00:00:06.023 [Pipeline] echo 00:00:06.024 Cleanup processes 00:00:06.026 [Pipeline] sh 00:00:06.314 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:06.314 792215 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:06.332 [Pipeline] sh 00:00:06.623 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:06.623 ++ grep -v 'sudo pgrep' 00:00:06.623 ++ awk '{print $1}' 00:00:06.623 + sudo kill -9 00:00:06.623 + true 00:00:06.637 [Pipeline] cleanWs 00:00:06.646 [WS-CLEANUP] Deleting project workspace... 00:00:06.646 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.653 [WS-CLEANUP] done 00:00:06.656 [Pipeline] setCustomBuildProperty 00:00:06.667 [Pipeline] sh 00:00:06.950 + sudo git config --global --replace-all safe.directory '*' 00:00:07.092 [Pipeline] httpRequest 00:00:07.562 [Pipeline] echo 00:00:07.564 Sorcerer 10.211.164.112 is alive 00:00:07.573 [Pipeline] retry 00:00:07.575 [Pipeline] { 00:00:07.585 [Pipeline] httpRequest 00:00:07.589 HttpMethod: GET 00:00:07.590 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.590 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.612 Response Code: HTTP/1.1 200 OK 00:00:07.613 Success: Status code 200 is in the accepted range: 200,404 00:00:07.613 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:24.254 [Pipeline] } 00:00:24.267 [Pipeline] // retry 00:00:24.275 [Pipeline] sh 00:00:24.563 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:24.579 [Pipeline] httpRequest 00:00:25.092 [Pipeline] echo 00:00:25.094 Sorcerer 10.211.164.112 is alive 00:00:25.103 [Pipeline] retry 00:00:25.105 [Pipeline] { 00:00:25.118 [Pipeline] httpRequest 00:00:25.122 HttpMethod: GET 00:00:25.123 URL: http://10.211.164.112/packages/spdk_b8248e28c89c09106c84e7622ffae26b1edceaab.tar.gz 00:00:25.124 Sending request to url: http://10.211.164.112/packages/spdk_b8248e28c89c09106c84e7622ffae26b1edceaab.tar.gz 00:00:25.130 Response Code: HTTP/1.1 200 OK 00:00:25.130 Success: Status code 200 is in the accepted range: 200,404 00:00:25.131 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_b8248e28c89c09106c84e7622ffae26b1edceaab.tar.gz 00:02:09.800 [Pipeline] } 00:02:09.820 [Pipeline] // retry 00:02:09.827 [Pipeline] sh 00:02:10.113 + tar --no-same-owner -xf spdk_b8248e28c89c09106c84e7622ffae26b1edceaab.tar.gz 00:02:12.665 [Pipeline] sh 00:02:12.950 + git -C spdk log --oneline -n5 00:02:12.950 b8248e28c test/check_so_deps: use VERSION to look for prior tags 00:02:12.950 805149865 build: use VERSION file for storing version 00:02:12.950 a5e6ecf28 lib/reduce: Data copy logic in thin read operations 00:02:12.950 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair 00:02:12.950 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting 00:02:12.962 [Pipeline] } 00:02:12.975 [Pipeline] // stage 00:02:12.983 [Pipeline] stage 00:02:12.984 [Pipeline] { (Prepare) 00:02:12.998 [Pipeline] writeFile 00:02:13.012 [Pipeline] sh 00:02:13.297 + logger -p user.info -t JENKINS-CI 00:02:13.310 [Pipeline] sh 00:02:13.598 + logger -p user.info -t JENKINS-CI 00:02:13.609 [Pipeline] sh 00:02:13.893 + cat autorun-spdk.conf 00:02:13.893 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:13.893 SPDK_TEST_FUZZER_SHORT=1 00:02:13.893 SPDK_TEST_FUZZER=1 00:02:13.893 SPDK_TEST_SETUP=1 00:02:13.893 SPDK_RUN_UBSAN=1 00:02:13.900 RUN_NIGHTLY=0 00:02:13.905 [Pipeline] readFile 00:02:13.926 [Pipeline] withEnv 00:02:13.928 [Pipeline] { 00:02:13.942 [Pipeline] sh 00:02:14.233 + set -ex 00:02:14.233 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:02:14.233 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:02:14.233 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:14.233 ++ SPDK_TEST_FUZZER_SHORT=1 00:02:14.233 ++ SPDK_TEST_FUZZER=1 00:02:14.233 ++ SPDK_TEST_SETUP=1 00:02:14.233 ++ SPDK_RUN_UBSAN=1 00:02:14.233 ++ RUN_NIGHTLY=0 00:02:14.233 + case $SPDK_TEST_NVMF_NICS in 
00:02:14.233 + DRIVERS= 00:02:14.233 + [[ -n '' ]] 00:02:14.233 + exit 0 00:02:14.242 [Pipeline] } 00:02:14.262 [Pipeline] // withEnv 00:02:14.267 [Pipeline] } 00:02:14.276 [Pipeline] // stage 00:02:14.283 [Pipeline] catchError 00:02:14.285 [Pipeline] { 00:02:14.296 [Pipeline] timeout 00:02:14.296 Timeout set to expire in 30 min 00:02:14.297 [Pipeline] { 00:02:14.307 [Pipeline] stage 00:02:14.308 [Pipeline] { (Tests) 00:02:14.319 [Pipeline] sh 00:02:14.606 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:02:14.606 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:02:14.606 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:02:14.606 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:02:14.606 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:14.606 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:02:14.606 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:02:14.606 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:02:14.606 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:02:14.606 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:02:14.606 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:02:14.606 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:02:14.606 + source /etc/os-release 00:02:14.606 ++ NAME='Fedora Linux' 00:02:14.606 ++ VERSION='39 (Cloud Edition)' 00:02:14.606 ++ ID=fedora 00:02:14.606 ++ VERSION_ID=39 00:02:14.606 ++ VERSION_CODENAME= 00:02:14.606 ++ PLATFORM_ID=platform:f39 00:02:14.606 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:14.606 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:14.606 ++ LOGO=fedora-logo-icon 00:02:14.606 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:14.606 ++ HOME_URL=https://fedoraproject.org/ 00:02:14.606 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:14.606 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:14.606 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:14.606 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:14.606 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:14.606 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:14.606 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:14.606 ++ SUPPORT_END=2024-11-12 00:02:14.606 ++ VARIANT='Cloud Edition' 00:02:14.606 ++ VARIANT_ID=cloud 00:02:14.606 + uname -a 00:02:14.606 Linux spdk-wfp-49 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:14.606 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:02:17.145 Hugepages 00:02:17.145 node hugesize free / total 00:02:17.145 node0 1048576kB 0 / 0 00:02:17.145 node0 2048kB 0 / 0 00:02:17.145 node1 1048576kB 0 / 0 00:02:17.145 node1 2048kB 0 / 0 00:02:17.145 00:02:17.145 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:17.145 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:02:17.145 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:02:17.145 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:17.145 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:02:17.145 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:17.145 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:17.145 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:17.145 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:17.145 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:02:17.145 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:17.145 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 
00:02:17.145 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:17.145 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:17.145 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:17.145 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:17.145 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:02:17.145 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:02:17.145 + rm -f /tmp/spdk-ld-path 00:02:17.145 + source autorun-spdk.conf 00:02:17.145 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:17.145 ++ SPDK_TEST_FUZZER_SHORT=1 00:02:17.145 ++ SPDK_TEST_FUZZER=1 00:02:17.145 ++ SPDK_TEST_SETUP=1 00:02:17.145 ++ SPDK_RUN_UBSAN=1 00:02:17.145 ++ RUN_NIGHTLY=0 00:02:17.145 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:17.145 + [[ -n '' ]] 00:02:17.145 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:17.146 + for M in /var/spdk/build-*-manifest.txt 00:02:17.146 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:17.146 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:02:17.146 + for M in /var/spdk/build-*-manifest.txt 00:02:17.146 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:17.146 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:02:17.146 + for M in /var/spdk/build-*-manifest.txt 00:02:17.146 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:17.146 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:02:17.146 ++ uname 00:02:17.146 + [[ Linux == \L\i\n\u\x ]] 00:02:17.146 + sudo dmesg -T 00:02:17.146 + sudo dmesg --clear 00:02:17.146 + dmesg_pid=793479 00:02:17.146 + [[ Fedora Linux == FreeBSD ]] 00:02:17.146 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:17.146 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:17.146 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:17.146 + [[ -x /usr/src/fio-static/fio ]] 00:02:17.146 + export FIO_BIN=/usr/src/fio-static/fio 00:02:17.146 + FIO_BIN=/usr/src/fio-static/fio 00:02:17.146 + sudo dmesg -Tw 00:02:17.146 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:17.146 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:17.146 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:17.146 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:17.146 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:17.146 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:17.146 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:17.146 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:17.146 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:02:17.406 15:35:12 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:17.406 15:35:12 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:02:17.406 15:35:12 -- short-fuzz-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:17.406 15:35:12 -- short-fuzz-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_FUZZER_SHORT=1 00:02:17.406 15:35:12 -- short-fuzz-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_FUZZER=1 00:02:17.406 15:35:12 -- short-fuzz-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_SETUP=1 00:02:17.406 15:35:12 -- short-fuzz-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1 00:02:17.406 15:35:12 -- short-fuzz-phy-autotest/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:02:17.406 15:35:12 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:17.406 15:35:12 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:02:17.406 15:35:12 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:17.406 15:35:12 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:17.406 15:35:12 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:17.406 15:35:12 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:17.406 15:35:12 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:17.406 15:35:12 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:17.406 15:35:12 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.406 15:35:12 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.406 15:35:12 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.406 15:35:12 -- paths/export.sh@5 -- $ export PATH 00:02:17.406 15:35:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.406 15:35:12 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:17.406 15:35:12 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:17.406 Traceback (most recent call last): 00:02:17.406 File "/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py", line 24, in 00:02:17.406 import spdk.rpc as rpc # noqa 00:02:17.406 ^^^^^^^^^^^^^^^^^^^^^^ 00:02:17.406 File "/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python/spdk/__init__.py", line 5, in 00:02:17.406 from .version import __version__ 00:02:17.406 ModuleNotFoundError: No module named 'spdk.version' 00:02:17.406 15:35:12 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733754912.XXXXXX 00:02:17.406 15:35:12 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733754912.XQMgww 00:02:17.406 15:35:12 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:17.406 15:35:12 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:17.406 15:35:12 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:02:17.406 15:35:12 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:17.406 15:35:12 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:17.406 15:35:12 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:17.406 15:35:12 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:17.406 15:35:12 -- common/autotest_common.sh@10 -- $ set +x 00:02:17.406 15:35:12 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:17.406 15:35:12 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:17.406 15:35:12 -- pm/common@17 -- $ local monitor 00:02:17.406 15:35:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.406 15:35:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.406 15:35:12 -- pm/common@21 -- $ date +%s 00:02:17.406 15:35:12 -- pm/common@19 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:17.406 15:35:12 -- pm/common@21 -- $ date +%s 00:02:17.406 15:35:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.406 15:35:12 -- pm/common@25 -- $ sleep 1 00:02:17.406 15:35:12 -- pm/common@21 -- $ date +%s 00:02:17.406 15:35:12 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733754912 00:02:17.406 15:35:12 -- pm/common@21 -- $ date +%s 00:02:17.406 15:35:12 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733754912 00:02:17.406 15:35:12 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733754912 00:02:17.406 15:35:12 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733754912 00:02:17.407 Traceback (most recent call last): 00:02:17.407 File "/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py", line 24, in 00:02:17.407 import spdk.rpc as rpc # noqa 00:02:17.407 ^^^^^^^^^^^^^^^^^^^^^^ 00:02:17.407 File "/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python/spdk/__init__.py", line 5, in 00:02:17.407 from .version import __version__ 00:02:17.407 ModuleNotFoundError: No module named 'spdk.version' 00:02:17.407 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733754912_collect-cpu-load.pm.log 00:02:17.407 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733754912_collect-cpu-temp.pm.log 00:02:17.407 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733754912_collect-vmstat.pm.log 00:02:17.407 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733754912_collect-bmc-pm.bmc.pm.log 00:02:18.343 15:35:13 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:18.343 15:35:13 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:18.343 15:35:13 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:18.343 15:35:13 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:18.343 15:35:13 -- spdk/autobuild.sh@16 -- $ date -u 00:02:18.343 Mon Dec 9 02:35:13 PM UTC 2024 00:02:18.343 15:35:13 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:18.343 v25.01-pre-305-gb8248e28c 00:02:18.343 15:35:13 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:18.343 15:35:13 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:18.343 15:35:13 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:18.343 15:35:13 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:18.343 15:35:13 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:18.343 15:35:13 -- common/autotest_common.sh@10 -- $ set +x 00:02:18.603 ************************************ 00:02:18.603 START TEST ubsan 00:02:18.603 ************************************ 00:02:18.603 15:35:13 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:18.603 using ubsan 
00:02:18.603 00:02:18.603 real 0m0.001s 00:02:18.603 user 0m0.000s 00:02:18.603 sys 0m0.000s 00:02:18.603 15:35:13 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:18.603 15:35:13 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:18.603 ************************************ 00:02:18.603 END TEST ubsan 00:02:18.603 ************************************ 00:02:18.603 15:35:13 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:18.603 15:35:13 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:18.603 15:35:13 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:18.603 15:35:13 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:02:18.603 15:35:13 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:02:18.603 15:35:13 -- common/autobuild_common.sh@445 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:02:18.603 15:35:13 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:02:18.603 15:35:13 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:18.603 15:35:13 -- common/autotest_common.sh@10 -- $ set +x 00:02:18.603 ************************************ 00:02:18.603 START TEST autobuild_llvm_precompile 00:02:18.603 ************************************ 00:02:18.603 15:35:13 autobuild_llvm_precompile -- common/autotest_common.sh@1129 -- $ _llvm_precompile 00:02:18.603 15:35:13 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:02:18.603 15:35:13 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39) 00:02:18.603 Target: x86_64-redhat-linux-gnu 00:02:18.603 Thread model: posix 00:02:18.603 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:02:18.603 15:35:13 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=17 00:02:18.603 15:35:13 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-17 00:02:18.603 15:35:13 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-17 00:02:18.603 15:35:13 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17 00:02:18.603 15:35:13 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-17 00:02:18.603 15:35:13 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:02:18.603 15:35:13 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:02:18.603 15:35:13 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]] 00:02:18.603 15:35:13 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a' 00:02:18.603 15:35:13 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 
00:02:18.879 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:02:18.879 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:02:19.139 Using 'verbs' RDMA provider 00:02:34.975 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:47.203 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:47.773 Creating mk/config.mk...done. 00:02:47.773 Creating mk/cc.flags.mk...done. 00:02:47.773 Type 'make' to build. 00:02:47.773 00:02:47.773 real 0m29.084s 00:02:47.773 user 0m12.964s 00:02:47.773 sys 0m15.435s 00:02:47.773 15:35:42 autobuild_llvm_precompile -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:47.773 15:35:42 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:02:47.773 ************************************ 00:02:47.773 END TEST autobuild_llvm_precompile 00:02:47.773 ************************************ 00:02:47.773 15:35:42 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:47.773 15:35:42 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:47.773 15:35:42 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:47.773 15:35:42 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:02:47.773 15:35:42 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:02:48.033 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:02:48.033 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:02:48.293 Using 'verbs' RDMA provider 00:03:01.459 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:11.447 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:12.278 Creating mk/config.mk...done. 00:03:12.278 Creating mk/cc.flags.mk...done. 00:03:12.278 Type 'make' to build. 
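For context on the two configure invocations above: the autobuild_llvm_precompile trace shows how the fuzzer archive is located before configure is re-run with --with-fuzzer. The clang major version is parsed out of `clang --version`, CC/CXX are switched to the versioned clang binaries, and a glob finds the matching libclang_rt.fuzzer_no_main archive, which is appended to the existing config_params. The lines below are a minimal sketch reconstructed from that trace, not the verbatim common/autobuild_common.sh source; the simplified glob and the bare ./configure path are assumptions for illustration, and the flag list is copied from the log.

  #!/usr/bin/env bash
  # Sketch of the fuzzer-precompile step traced above (illustrative only).
  shopt -s extglob   # needed for the ?(-x86_64) pattern, as in scripts/common.sh

  # Parse the clang major version, e.g. "17" from "clang version 17.0.6 ...".
  if [[ $(clang --version) =~ version\ (([0-9]+)\.([0-9]+)\.([0-9]+)) ]]; then
      clang_num=${BASH_REMATCH[2]}
  fi
  export CC=clang-$clang_num
  export CXX=clang++-$clang_num

  # Find the libFuzzer "no_main" archive shipped with this clang release
  # (simplified from the trace, which also tries the full clang_version).
  fuzzer_libs=(/usr/lib*/clang/"$clang_num"/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
  fuzzer_lib=${fuzzer_libs[0]}
  [[ -e $fuzzer_lib ]] || exit 1   # bail out here for brevity

  # Re-run configure with the fuzzer archive added to the flags from the log.
  config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
  ./configure $config_params --with-fuzzer="$fuzzer_lib"

In the run above this resolves to /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a, which is the path visible in both configure command lines in the log; the make step that follows then builds with clang-17 and that fuzzer runtime.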
00:03:12.278 15:36:07 -- spdk/autobuild.sh@70 -- $ run_test make make -j72 00:03:12.278 15:36:07 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:12.278 15:36:07 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:12.278 15:36:07 -- common/autotest_common.sh@10 -- $ set +x 00:03:12.278 ************************************ 00:03:12.278 START TEST make 00:03:12.278 ************************************ 00:03:12.278 15:36:07 make -- common/autotest_common.sh@1129 -- $ make -j72 00:03:14.194 The Meson build system 00:03:14.194 Version: 1.5.0 00:03:14.194 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:03:14.195 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:14.195 Build type: native build 00:03:14.195 Project name: libvfio-user 00:03:14.195 Project version: 0.0.1 00:03:14.195 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:03:14.195 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:03:14.195 Host machine cpu family: x86_64 00:03:14.195 Host machine cpu: x86_64 00:03:14.195 Run-time dependency threads found: YES 00:03:14.195 Library dl found: YES 00:03:14.195 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:14.195 Run-time dependency json-c found: YES 0.17 00:03:14.195 Run-time dependency cmocka found: YES 1.1.7 00:03:14.195 Program pytest-3 found: NO 00:03:14.195 Program flake8 found: NO 00:03:14.195 Program misspell-fixer found: NO 00:03:14.195 Program restructuredtext-lint found: NO 00:03:14.195 Program valgrind found: YES (/usr/bin/valgrind) 00:03:14.195 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:14.195 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:14.195 Compiler for C supports arguments -Wwrite-strings: YES 00:03:14.195 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:14.195 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:14.195 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:14.195 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:14.195 Build targets in project: 8 00:03:14.195 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:14.195 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:14.195 00:03:14.195 libvfio-user 0.0.1 00:03:14.195 00:03:14.195 User defined options 00:03:14.195 buildtype : debug 00:03:14.195 default_library: static 00:03:14.195 libdir : /usr/local/lib 00:03:14.195 00:03:14.195 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:14.762 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:14.762 [1/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:14.762 [2/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:03:14.762 [3/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:14.762 [4/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:03:14.762 [5/36] Compiling C object samples/null.p/null.c.o 00:03:14.762 [6/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:03:14.762 [7/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:03:14.762 [8/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:14.762 [9/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:14.762 [10/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:14.762 [11/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:14.762 [12/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:14.762 [13/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:03:14.762 [14/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:14.762 [15/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:03:14.762 [16/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:03:14.762 [17/36] Compiling C object test/unit_tests.p/mocks.c.o 00:03:14.762 [18/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:14.762 [19/36] Compiling C object samples/server.p/server.c.o 00:03:14.762 [20/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:14.762 [21/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:14.762 [22/36] Compiling C object samples/lspci.p/lspci.c.o 00:03:14.762 [23/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:14.762 [24/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:14.762 [25/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:14.762 [26/36] Compiling C object samples/client.p/client.c.o 00:03:14.762 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:03:14.762 [28/36] Linking static target lib/libvfio-user.a 00:03:14.762 [29/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:15.021 [30/36] Linking target samples/client 00:03:15.021 [31/36] Linking target samples/gpio-pci-idio-16 00:03:15.021 [32/36] Linking target samples/server 00:03:15.021 [33/36] Linking target samples/shadow_ioeventfd_server 00:03:15.021 [34/36] Linking target samples/lspci 00:03:15.021 [35/36] Linking target test/unit_tests 00:03:15.021 [36/36] Linking target samples/null 00:03:15.021 INFO: autodetecting backend as ninja 00:03:15.021 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:15.021 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:15.280 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:15.280 ninja: no work to do. 00:03:20.554 The Meson build system 00:03:20.554 Version: 1.5.0 00:03:20.554 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:03:20.554 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:03:20.554 Build type: native build 00:03:20.554 Program cat found: YES (/usr/bin/cat) 00:03:20.554 Project name: DPDK 00:03:20.554 Project version: 24.03.0 00:03:20.554 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:03:20.554 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:03:20.554 Host machine cpu family: x86_64 00:03:20.554 Host machine cpu: x86_64 00:03:20.554 Message: ## Building in Developer Mode ## 00:03:20.554 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:20.554 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:03:20.554 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:20.554 Program python3 found: YES (/usr/bin/python3) 00:03:20.554 Program cat found: YES (/usr/bin/cat) 00:03:20.554 Compiler for C supports arguments -march=native: YES 00:03:20.554 Checking for size of "void *" : 8 00:03:20.554 Checking for size of "void *" : 8 (cached) 00:03:20.554 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:20.554 Library m found: YES 00:03:20.554 Library numa found: YES 00:03:20.554 Has header "numaif.h" : YES 00:03:20.554 Library fdt found: NO 00:03:20.555 Library execinfo found: NO 00:03:20.555 Has header "execinfo.h" : YES 00:03:20.555 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:20.555 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:20.555 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:20.555 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:20.555 Run-time dependency openssl found: YES 3.1.1 00:03:20.555 Run-time dependency libpcap found: YES 1.10.4 00:03:20.555 Has header "pcap.h" with dependency libpcap: YES 00:03:20.555 Compiler for C supports arguments -Wcast-qual: YES 00:03:20.555 Compiler for C supports arguments -Wdeprecated: YES 00:03:20.555 Compiler for C supports arguments -Wformat: YES 00:03:20.555 Compiler for C supports arguments -Wformat-nonliteral: YES 00:03:20.555 Compiler for C supports arguments -Wformat-security: YES 00:03:20.555 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:20.555 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:20.555 Compiler for C supports arguments -Wnested-externs: YES 00:03:20.555 Compiler for C supports arguments -Wold-style-definition: YES 00:03:20.555 Compiler for C supports arguments -Wpointer-arith: YES 00:03:20.555 Compiler for C supports arguments -Wsign-compare: YES 00:03:20.555 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:20.555 Compiler for C supports arguments -Wundef: YES 00:03:20.555 Compiler for C supports arguments -Wwrite-strings: YES 00:03:20.555 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:20.555 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:03:20.555 Compiler for C supports arguments -Wno-missing-field-initializers: 
YES 00:03:20.555 Program objdump found: YES (/usr/bin/objdump) 00:03:20.555 Compiler for C supports arguments -mavx512f: YES 00:03:20.555 Checking if "AVX512 checking" compiles: YES 00:03:20.555 Fetching value of define "__SSE4_2__" : 1 00:03:20.555 Fetching value of define "__AES__" : 1 00:03:20.555 Fetching value of define "__AVX__" : 1 00:03:20.555 Fetching value of define "__AVX2__" : 1 00:03:20.555 Fetching value of define "__AVX512BW__" : 1 00:03:20.555 Fetching value of define "__AVX512CD__" : 1 00:03:20.555 Fetching value of define "__AVX512DQ__" : 1 00:03:20.555 Fetching value of define "__AVX512F__" : 1 00:03:20.555 Fetching value of define "__AVX512VL__" : 1 00:03:20.555 Fetching value of define "__PCLMUL__" : 1 00:03:20.555 Fetching value of define "__RDRND__" : 1 00:03:20.555 Fetching value of define "__RDSEED__" : 1 00:03:20.555 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:20.555 Fetching value of define "__znver1__" : (undefined) 00:03:20.555 Fetching value of define "__znver2__" : (undefined) 00:03:20.555 Fetching value of define "__znver3__" : (undefined) 00:03:20.555 Fetching value of define "__znver4__" : (undefined) 00:03:20.555 Compiler for C supports arguments -Wno-format-truncation: NO 00:03:20.555 Message: lib/log: Defining dependency "log" 00:03:20.555 Message: lib/kvargs: Defining dependency "kvargs" 00:03:20.555 Message: lib/telemetry: Defining dependency "telemetry" 00:03:20.555 Checking for function "getentropy" : NO 00:03:20.555 Message: lib/eal: Defining dependency "eal" 00:03:20.555 Message: lib/ring: Defining dependency "ring" 00:03:20.555 Message: lib/rcu: Defining dependency "rcu" 00:03:20.555 Message: lib/mempool: Defining dependency "mempool" 00:03:20.555 Message: lib/mbuf: Defining dependency "mbuf" 00:03:20.555 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:20.555 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:20.555 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:20.555 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:20.555 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:20.555 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:20.555 Compiler for C supports arguments -mpclmul: YES 00:03:20.555 Compiler for C supports arguments -maes: YES 00:03:20.555 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:20.555 Compiler for C supports arguments -mavx512bw: YES 00:03:20.555 Compiler for C supports arguments -mavx512dq: YES 00:03:20.555 Compiler for C supports arguments -mavx512vl: YES 00:03:20.555 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:20.555 Compiler for C supports arguments -mavx2: YES 00:03:20.555 Compiler for C supports arguments -mavx: YES 00:03:20.555 Message: lib/net: Defining dependency "net" 00:03:20.555 Message: lib/meter: Defining dependency "meter" 00:03:20.555 Message: lib/ethdev: Defining dependency "ethdev" 00:03:20.555 Message: lib/pci: Defining dependency "pci" 00:03:20.555 Message: lib/cmdline: Defining dependency "cmdline" 00:03:20.555 Message: lib/hash: Defining dependency "hash" 00:03:20.555 Message: lib/timer: Defining dependency "timer" 00:03:20.555 Message: lib/compressdev: Defining dependency "compressdev" 00:03:20.555 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:20.555 Message: lib/dmadev: Defining dependency "dmadev" 00:03:20.555 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:20.555 Message: lib/power: Defining dependency "power" 00:03:20.555 Message: lib/reorder: Defining 
dependency "reorder" 00:03:20.555 Message: lib/security: Defining dependency "security" 00:03:20.555 Has header "linux/userfaultfd.h" : YES 00:03:20.555 Has header "linux/vduse.h" : YES 00:03:20.555 Message: lib/vhost: Defining dependency "vhost" 00:03:20.555 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:03:20.555 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:20.555 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:20.555 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:20.555 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:20.555 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:20.555 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:20.555 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:20.555 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:20.555 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:20.555 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:20.555 Configuring doxy-api-html.conf using configuration 00:03:20.555 Configuring doxy-api-man.conf using configuration 00:03:20.555 Program mandb found: YES (/usr/bin/mandb) 00:03:20.555 Program sphinx-build found: NO 00:03:20.555 Configuring rte_build_config.h using configuration 00:03:20.555 Message: 00:03:20.555 ================= 00:03:20.555 Applications Enabled 00:03:20.555 ================= 00:03:20.555 00:03:20.555 apps: 00:03:20.555 00:03:20.556 00:03:20.556 Message: 00:03:20.556 ================= 00:03:20.556 Libraries Enabled 00:03:20.556 ================= 00:03:20.556 00:03:20.556 libs: 00:03:20.556 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:20.556 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:20.556 cryptodev, dmadev, power, reorder, security, vhost, 00:03:20.556 00:03:20.556 Message: 00:03:20.556 =============== 00:03:20.556 Drivers Enabled 00:03:20.556 =============== 00:03:20.556 00:03:20.556 common: 00:03:20.556 00:03:20.556 bus: 00:03:20.556 pci, vdev, 00:03:20.556 mempool: 00:03:20.556 ring, 00:03:20.556 dma: 00:03:20.556 00:03:20.556 net: 00:03:20.556 00:03:20.556 crypto: 00:03:20.556 00:03:20.556 compress: 00:03:20.556 00:03:20.556 vdpa: 00:03:20.556 00:03:20.556 00:03:20.556 Message: 00:03:20.556 ================= 00:03:20.556 Content Skipped 00:03:20.556 ================= 00:03:20.556 00:03:20.556 apps: 00:03:20.556 dumpcap: explicitly disabled via build config 00:03:20.556 graph: explicitly disabled via build config 00:03:20.556 pdump: explicitly disabled via build config 00:03:20.556 proc-info: explicitly disabled via build config 00:03:20.556 test-acl: explicitly disabled via build config 00:03:20.556 test-bbdev: explicitly disabled via build config 00:03:20.556 test-cmdline: explicitly disabled via build config 00:03:20.556 test-compress-perf: explicitly disabled via build config 00:03:20.556 test-crypto-perf: explicitly disabled via build config 00:03:20.556 test-dma-perf: explicitly disabled via build config 00:03:20.556 test-eventdev: explicitly disabled via build config 00:03:20.556 test-fib: explicitly disabled via build config 00:03:20.556 test-flow-perf: explicitly disabled via build config 00:03:20.556 test-gpudev: explicitly disabled via build config 00:03:20.556 test-mldev: explicitly disabled via build config 00:03:20.556 test-pipeline: explicitly disabled via build config 00:03:20.556 test-pmd: 
explicitly disabled via build config 00:03:20.556 test-regex: explicitly disabled via build config 00:03:20.556 test-sad: explicitly disabled via build config 00:03:20.556 test-security-perf: explicitly disabled via build config 00:03:20.556 00:03:20.556 libs: 00:03:20.556 argparse: explicitly disabled via build config 00:03:20.556 metrics: explicitly disabled via build config 00:03:20.556 acl: explicitly disabled via build config 00:03:20.556 bbdev: explicitly disabled via build config 00:03:20.556 bitratestats: explicitly disabled via build config 00:03:20.556 bpf: explicitly disabled via build config 00:03:20.556 cfgfile: explicitly disabled via build config 00:03:20.556 distributor: explicitly disabled via build config 00:03:20.556 efd: explicitly disabled via build config 00:03:20.556 eventdev: explicitly disabled via build config 00:03:20.556 dispatcher: explicitly disabled via build config 00:03:20.556 gpudev: explicitly disabled via build config 00:03:20.556 gro: explicitly disabled via build config 00:03:20.556 gso: explicitly disabled via build config 00:03:20.556 ip_frag: explicitly disabled via build config 00:03:20.556 jobstats: explicitly disabled via build config 00:03:20.556 latencystats: explicitly disabled via build config 00:03:20.556 lpm: explicitly disabled via build config 00:03:20.556 member: explicitly disabled via build config 00:03:20.556 pcapng: explicitly disabled via build config 00:03:20.556 rawdev: explicitly disabled via build config 00:03:20.556 regexdev: explicitly disabled via build config 00:03:20.556 mldev: explicitly disabled via build config 00:03:20.556 rib: explicitly disabled via build config 00:03:20.556 sched: explicitly disabled via build config 00:03:20.556 stack: explicitly disabled via build config 00:03:20.556 ipsec: explicitly disabled via build config 00:03:20.556 pdcp: explicitly disabled via build config 00:03:20.556 fib: explicitly disabled via build config 00:03:20.556 port: explicitly disabled via build config 00:03:20.556 pdump: explicitly disabled via build config 00:03:20.556 table: explicitly disabled via build config 00:03:20.556 pipeline: explicitly disabled via build config 00:03:20.556 graph: explicitly disabled via build config 00:03:20.556 node: explicitly disabled via build config 00:03:20.556 00:03:20.556 drivers: 00:03:20.556 common/cpt: not in enabled drivers build config 00:03:20.556 common/dpaax: not in enabled drivers build config 00:03:20.556 common/iavf: not in enabled drivers build config 00:03:20.556 common/idpf: not in enabled drivers build config 00:03:20.556 common/ionic: not in enabled drivers build config 00:03:20.556 common/mvep: not in enabled drivers build config 00:03:20.556 common/octeontx: not in enabled drivers build config 00:03:20.556 bus/auxiliary: not in enabled drivers build config 00:03:20.556 bus/cdx: not in enabled drivers build config 00:03:20.556 bus/dpaa: not in enabled drivers build config 00:03:20.556 bus/fslmc: not in enabled drivers build config 00:03:20.556 bus/ifpga: not in enabled drivers build config 00:03:20.556 bus/platform: not in enabled drivers build config 00:03:20.556 bus/uacce: not in enabled drivers build config 00:03:20.556 bus/vmbus: not in enabled drivers build config 00:03:20.556 common/cnxk: not in enabled drivers build config 00:03:20.556 common/mlx5: not in enabled drivers build config 00:03:20.556 common/nfp: not in enabled drivers build config 00:03:20.556 common/nitrox: not in enabled drivers build config 00:03:20.556 common/qat: not in enabled drivers build config 
00:03:20.556 common/sfc_efx: not in enabled drivers build config 00:03:20.556 mempool/bucket: not in enabled drivers build config 00:03:20.556 mempool/cnxk: not in enabled drivers build config 00:03:20.556 mempool/dpaa: not in enabled drivers build config 00:03:20.556 mempool/dpaa2: not in enabled drivers build config 00:03:20.556 mempool/octeontx: not in enabled drivers build config 00:03:20.556 mempool/stack: not in enabled drivers build config 00:03:20.556 dma/cnxk: not in enabled drivers build config 00:03:20.556 dma/dpaa: not in enabled drivers build config 00:03:20.556 dma/dpaa2: not in enabled drivers build config 00:03:20.556 dma/hisilicon: not in enabled drivers build config 00:03:20.556 dma/idxd: not in enabled drivers build config 00:03:20.556 dma/ioat: not in enabled drivers build config 00:03:20.556 dma/skeleton: not in enabled drivers build config 00:03:20.557 net/af_packet: not in enabled drivers build config 00:03:20.557 net/af_xdp: not in enabled drivers build config 00:03:20.557 net/ark: not in enabled drivers build config 00:03:20.557 net/atlantic: not in enabled drivers build config 00:03:20.557 net/avp: not in enabled drivers build config 00:03:20.557 net/axgbe: not in enabled drivers build config 00:03:20.557 net/bnx2x: not in enabled drivers build config 00:03:20.557 net/bnxt: not in enabled drivers build config 00:03:20.557 net/bonding: not in enabled drivers build config 00:03:20.557 net/cnxk: not in enabled drivers build config 00:03:20.557 net/cpfl: not in enabled drivers build config 00:03:20.557 net/cxgbe: not in enabled drivers build config 00:03:20.557 net/dpaa: not in enabled drivers build config 00:03:20.557 net/dpaa2: not in enabled drivers build config 00:03:20.557 net/e1000: not in enabled drivers build config 00:03:20.557 net/ena: not in enabled drivers build config 00:03:20.557 net/enetc: not in enabled drivers build config 00:03:20.557 net/enetfec: not in enabled drivers build config 00:03:20.557 net/enic: not in enabled drivers build config 00:03:20.557 net/failsafe: not in enabled drivers build config 00:03:20.557 net/fm10k: not in enabled drivers build config 00:03:20.557 net/gve: not in enabled drivers build config 00:03:20.557 net/hinic: not in enabled drivers build config 00:03:20.557 net/hns3: not in enabled drivers build config 00:03:20.557 net/i40e: not in enabled drivers build config 00:03:20.557 net/iavf: not in enabled drivers build config 00:03:20.557 net/ice: not in enabled drivers build config 00:03:20.557 net/idpf: not in enabled drivers build config 00:03:20.557 net/igc: not in enabled drivers build config 00:03:20.557 net/ionic: not in enabled drivers build config 00:03:20.557 net/ipn3ke: not in enabled drivers build config 00:03:20.557 net/ixgbe: not in enabled drivers build config 00:03:20.557 net/mana: not in enabled drivers build config 00:03:20.557 net/memif: not in enabled drivers build config 00:03:20.557 net/mlx4: not in enabled drivers build config 00:03:20.557 net/mlx5: not in enabled drivers build config 00:03:20.557 net/mvneta: not in enabled drivers build config 00:03:20.557 net/mvpp2: not in enabled drivers build config 00:03:20.557 net/netvsc: not in enabled drivers build config 00:03:20.557 net/nfb: not in enabled drivers build config 00:03:20.557 net/nfp: not in enabled drivers build config 00:03:20.557 net/ngbe: not in enabled drivers build config 00:03:20.557 net/null: not in enabled drivers build config 00:03:20.557 net/octeontx: not in enabled drivers build config 00:03:20.557 net/octeon_ep: not in enabled 
drivers build config 00:03:20.557 net/pcap: not in enabled drivers build config 00:03:20.557 net/pfe: not in enabled drivers build config 00:03:20.557 net/qede: not in enabled drivers build config 00:03:20.557 net/ring: not in enabled drivers build config 00:03:20.557 net/sfc: not in enabled drivers build config 00:03:20.557 net/softnic: not in enabled drivers build config 00:03:20.557 net/tap: not in enabled drivers build config 00:03:20.557 net/thunderx: not in enabled drivers build config 00:03:20.557 net/txgbe: not in enabled drivers build config 00:03:20.557 net/vdev_netvsc: not in enabled drivers build config 00:03:20.557 net/vhost: not in enabled drivers build config 00:03:20.557 net/virtio: not in enabled drivers build config 00:03:20.557 net/vmxnet3: not in enabled drivers build config 00:03:20.557 raw/*: missing internal dependency, "rawdev" 00:03:20.557 crypto/armv8: not in enabled drivers build config 00:03:20.557 crypto/bcmfs: not in enabled drivers build config 00:03:20.557 crypto/caam_jr: not in enabled drivers build config 00:03:20.557 crypto/ccp: not in enabled drivers build config 00:03:20.557 crypto/cnxk: not in enabled drivers build config 00:03:20.557 crypto/dpaa_sec: not in enabled drivers build config 00:03:20.557 crypto/dpaa2_sec: not in enabled drivers build config 00:03:20.557 crypto/ipsec_mb: not in enabled drivers build config 00:03:20.557 crypto/mlx5: not in enabled drivers build config 00:03:20.557 crypto/mvsam: not in enabled drivers build config 00:03:20.557 crypto/nitrox: not in enabled drivers build config 00:03:20.557 crypto/null: not in enabled drivers build config 00:03:20.557 crypto/octeontx: not in enabled drivers build config 00:03:20.557 crypto/openssl: not in enabled drivers build config 00:03:20.557 crypto/scheduler: not in enabled drivers build config 00:03:20.557 crypto/uadk: not in enabled drivers build config 00:03:20.557 crypto/virtio: not in enabled drivers build config 00:03:20.557 compress/isal: not in enabled drivers build config 00:03:20.557 compress/mlx5: not in enabled drivers build config 00:03:20.557 compress/nitrox: not in enabled drivers build config 00:03:20.557 compress/octeontx: not in enabled drivers build config 00:03:20.557 compress/zlib: not in enabled drivers build config 00:03:20.557 regex/*: missing internal dependency, "regexdev" 00:03:20.557 ml/*: missing internal dependency, "mldev" 00:03:20.557 vdpa/ifc: not in enabled drivers build config 00:03:20.557 vdpa/mlx5: not in enabled drivers build config 00:03:20.557 vdpa/nfp: not in enabled drivers build config 00:03:20.557 vdpa/sfc: not in enabled drivers build config 00:03:20.557 event/*: missing internal dependency, "eventdev" 00:03:20.557 baseband/*: missing internal dependency, "bbdev" 00:03:20.557 gpu/*: missing internal dependency, "gpudev" 00:03:20.557 00:03:20.557 00:03:21.125 Build targets in project: 85 00:03:21.125 00:03:21.125 DPDK 24.03.0 00:03:21.125 00:03:21.125 User defined options 00:03:21.125 buildtype : debug 00:03:21.125 default_library : static 00:03:21.125 libdir : lib 00:03:21.125 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:03:21.125 c_args : -fPIC -Werror 00:03:21.125 c_link_args : 00:03:21.125 cpu_instruction_set: native 00:03:21.125 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:03:21.125 disable_libs : 
bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:03:21.125 enable_docs : false 00:03:21.125 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:21.125 enable_kmods : false 00:03:21.125 max_lcores : 128 00:03:21.126 tests : false 00:03:21.126 00:03:21.126 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:21.391 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:03:21.391 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:21.391 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:21.391 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:21.391 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:21.391 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:21.391 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:21.391 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:21.391 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:21.391 [9/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:21.391 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:21.391 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:21.391 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:21.391 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:21.391 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:21.391 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:21.654 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:21.654 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:21.654 [18/268] Linking static target lib/librte_kvargs.a 00:03:21.654 [19/268] Linking static target lib/librte_log.a 00:03:21.921 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:21.921 [21/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:21.921 [22/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:21.921 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:21.921 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:21.921 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:21.921 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:21.921 [27/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:21.921 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:21.921 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:21.921 [30/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:21.921 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:21.921 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:21.921 [33/268] Compiling C object 
lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:21.921 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:21.921 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:21.921 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:21.921 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:21.921 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:21.921 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:21.921 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:21.921 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:21.921 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:21.921 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:21.921 [44/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:21.921 [45/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:21.921 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:22.180 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:22.180 [48/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:22.180 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:22.180 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:22.180 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:22.180 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:22.180 [53/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:22.180 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:22.180 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:22.180 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:22.180 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:22.180 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:22.180 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:22.180 [60/268] Linking static target lib/librte_telemetry.a 00:03:22.180 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:22.180 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:22.180 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:22.180 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:22.180 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:22.181 [66/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:22.181 [67/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.181 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:22.181 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:22.181 [70/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:22.181 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:22.181 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:22.181 [73/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:22.181 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:22.181 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:22.181 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:22.181 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:22.181 [78/268] Linking static target lib/librte_ring.a 00:03:22.181 [79/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:22.181 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:22.181 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:22.181 [82/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:22.181 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:22.181 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:22.181 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:22.181 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:22.181 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:22.181 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:22.181 [89/268] Linking static target lib/librte_pci.a 00:03:22.181 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:22.181 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:22.181 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:22.181 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:22.181 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:22.181 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:22.181 [96/268] Linking static target lib/librte_rcu.a 00:03:22.181 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:22.181 [98/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:22.181 [99/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:22.181 [100/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:22.181 [101/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:22.181 [102/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:22.181 [103/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:22.181 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:22.181 [105/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:22.181 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:22.181 [107/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:22.181 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:22.181 [109/268] Linking static target lib/librte_eal.a 00:03:22.181 [110/268] Linking static target lib/librte_mempool.a 00:03:22.444 [111/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:22.444 [112/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:22.444 [113/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:22.444 [114/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 
00:03:22.444 [115/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:22.444 [116/268] Linking static target lib/librte_mbuf.a 00:03:22.444 [117/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.444 [118/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.707 [119/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:22.707 [120/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:22.707 [121/268] Linking static target lib/librte_meter.a 00:03:22.707 [122/268] Linking static target lib/librte_net.a 00:03:22.707 [123/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.707 [124/268] Linking target lib/librte_log.so.24.1 00:03:22.707 [125/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:22.707 [126/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.707 [127/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:22.707 [128/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:22.707 [129/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:22.707 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:22.707 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:22.707 [132/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:22.707 [133/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:22.707 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:22.707 [135/268] Linking static target lib/librte_timer.a 00:03:22.707 [136/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:22.707 [137/268] Linking static target lib/librte_cmdline.a 00:03:22.707 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:22.707 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:22.707 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:22.707 [141/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:22.707 [142/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:22.707 [143/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:22.707 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:22.707 [145/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.707 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:22.707 [147/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:22.707 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:22.707 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:22.707 [150/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:22.707 [151/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:22.707 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:22.707 [153/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:22.707 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 
00:03:22.707 [155/268] Linking static target lib/librte_dmadev.a 00:03:22.707 [156/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:22.707 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:22.707 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:22.707 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:22.707 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:22.707 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:22.707 [162/268] Linking static target lib/librte_compressdev.a 00:03:22.966 [163/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:22.966 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:22.966 [165/268] Linking target lib/librte_kvargs.so.24.1 00:03:22.966 [166/268] Linking target lib/librte_telemetry.so.24.1 00:03:22.966 [167/268] Linking static target lib/librte_power.a 00:03:22.966 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:22.966 [169/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:22.966 [170/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:22.966 [171/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.966 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:22.966 [173/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.966 [174/268] Linking static target lib/librte_reorder.a 00:03:22.966 [175/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:22.966 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:22.966 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:22.966 [178/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:22.966 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:22.966 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:22.966 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:22.966 [182/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:22.966 [183/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:22.966 [184/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:22.966 [185/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:22.966 [186/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:22.966 [187/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:22.966 [188/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:22.966 [189/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:22.966 [190/268] Linking static target lib/librte_hash.a 00:03:22.966 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:22.966 [192/268] Linking static target lib/librte_security.a 00:03:22.966 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:22.966 [194/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:22.966 [195/268] Linking static target drivers/libtmp_rte_mempool_ring.a 
00:03:22.966 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:22.966 [197/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:22.966 [198/268] Linking static target lib/librte_cryptodev.a 00:03:22.966 [199/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.226 [200/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:23.226 [201/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:23.227 [202/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.227 [203/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:23.227 [204/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:23.227 [205/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.227 [206/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:23.227 [207/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:23.227 [208/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:23.227 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:23.227 [210/268] Linking static target drivers/librte_bus_pci.a 00:03:23.227 [211/268] Linking static target drivers/librte_bus_vdev.a 00:03:23.227 [212/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:23.227 [213/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:23.227 [214/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:23.227 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.227 [216/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:23.227 [217/268] Linking static target drivers/librte_mempool_ring.a 00:03:23.486 [218/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.486 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:23.486 [220/268] Linking static target lib/librte_ethdev.a 00:03:23.486 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.746 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.746 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.746 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.005 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:24.005 [226/268] Linking static target lib/librte_vhost.a 00:03:24.005 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.005 [228/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.005 [229/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.383 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.319 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture 
output) 00:03:32.887 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.264 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.264 [234/268] Linking target lib/librte_eal.so.24.1 00:03:34.523 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:34.523 [236/268] Linking target lib/librte_pci.so.24.1 00:03:34.523 [237/268] Linking target lib/librte_timer.so.24.1 00:03:34.523 [238/268] Linking target lib/librte_ring.so.24.1 00:03:34.523 [239/268] Linking target lib/librte_meter.so.24.1 00:03:34.523 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:34.523 [241/268] Linking target lib/librte_dmadev.so.24.1 00:03:34.781 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:34.781 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:34.781 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:34.781 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:34.781 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:34.781 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:34.781 [248/268] Linking target lib/librte_rcu.so.24.1 00:03:34.781 [249/268] Linking target lib/librte_mempool.so.24.1 00:03:34.781 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:34.781 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:35.039 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:35.040 [253/268] Linking target lib/librte_mbuf.so.24.1 00:03:35.040 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:35.298 [255/268] Linking target lib/librte_reorder.so.24.1 00:03:35.298 [256/268] Linking target lib/librte_net.so.24.1 00:03:35.298 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:03:35.298 [258/268] Linking target lib/librte_compressdev.so.24.1 00:03:35.298 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:35.298 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:35.298 [261/268] Linking target lib/librte_hash.so.24.1 00:03:35.298 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:35.298 [263/268] Linking target lib/librte_security.so.24.1 00:03:35.298 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:35.557 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:35.557 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:35.557 [267/268] Linking target lib/librte_power.so.24.1 00:03:35.557 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:35.557 INFO: autodetecting backend as ninja 00:03:35.557 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 72 00:03:36.933 CC lib/ut/ut.o 00:03:36.933 CC lib/log/log.o 00:03:36.933 CC lib/log/log_flags.o 00:03:36.933 CC lib/log/log_deprecated.o 00:03:36.933 CC lib/ut_mock/mock.o 00:03:36.933 LIB libspdk_ut.a 00:03:36.933 LIB libspdk_log.a 00:03:36.933 LIB libspdk_ut_mock.a 00:03:37.191 CC lib/util/base64.o 00:03:37.191 CC lib/util/bit_array.o 00:03:37.191 CC lib/util/cpuset.o 00:03:37.191 
CC lib/util/crc16.o 00:03:37.191 CC lib/util/crc32.o 00:03:37.191 CC lib/util/crc32c.o 00:03:37.191 CC lib/util/crc32_ieee.o 00:03:37.191 CC lib/util/fd.o 00:03:37.191 CC lib/util/crc64.o 00:03:37.191 CC lib/util/file.o 00:03:37.191 CC lib/util/dif.o 00:03:37.191 CC lib/util/fd_group.o 00:03:37.191 CC lib/util/math.o 00:03:37.191 CC lib/util/hexlify.o 00:03:37.191 CC lib/util/net.o 00:03:37.191 CC lib/util/iov.o 00:03:37.191 CC lib/util/pipe.o 00:03:37.191 CC lib/util/strerror_tls.o 00:03:37.191 CC lib/util/xor.o 00:03:37.191 CC lib/util/string.o 00:03:37.191 CC lib/util/uuid.o 00:03:37.191 CC lib/util/zipf.o 00:03:37.191 CC lib/util/md5.o 00:03:37.191 CXX lib/trace_parser/trace.o 00:03:37.191 CC lib/dma/dma.o 00:03:37.191 CC lib/ioat/ioat.o 00:03:37.191 LIB libspdk_dma.a 00:03:37.191 CC lib/vfio_user/host/vfio_user_pci.o 00:03:37.191 CC lib/vfio_user/host/vfio_user.o 00:03:37.449 LIB libspdk_ioat.a 00:03:37.449 LIB libspdk_vfio_user.a 00:03:37.449 LIB libspdk_util.a 00:03:37.708 LIB libspdk_trace_parser.a 00:03:37.708 CC lib/conf/conf.o 00:03:37.708 CC lib/idxd/idxd.o 00:03:37.708 CC lib/idxd/idxd_user.o 00:03:37.708 CC lib/idxd/idxd_kernel.o 00:03:37.708 CC lib/rdma_utils/rdma_utils.o 00:03:37.708 CC lib/json/json_util.o 00:03:37.708 CC lib/json/json_parse.o 00:03:37.708 CC lib/json/json_write.o 00:03:37.708 CC lib/env_dpdk/env.o 00:03:37.708 CC lib/vmd/vmd.o 00:03:37.708 CC lib/env_dpdk/memory.o 00:03:37.708 CC lib/vmd/led.o 00:03:37.708 CC lib/env_dpdk/pci.o 00:03:37.708 CC lib/env_dpdk/init.o 00:03:37.708 CC lib/env_dpdk/threads.o 00:03:37.708 CC lib/env_dpdk/pci_ioat.o 00:03:37.708 CC lib/env_dpdk/pci_virtio.o 00:03:37.708 CC lib/env_dpdk/pci_vmd.o 00:03:37.708 CC lib/env_dpdk/pci_idxd.o 00:03:37.708 CC lib/env_dpdk/pci_event.o 00:03:37.708 CC lib/env_dpdk/sigbus_handler.o 00:03:37.708 CC lib/env_dpdk/pci_dpdk.o 00:03:37.708 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:37.708 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:37.966 LIB libspdk_conf.a 00:03:37.966 LIB libspdk_rdma_utils.a 00:03:37.966 LIB libspdk_json.a 00:03:38.224 LIB libspdk_idxd.a 00:03:38.224 LIB libspdk_vmd.a 00:03:38.224 CC lib/rdma_provider/common.o 00:03:38.224 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:38.224 CC lib/jsonrpc/jsonrpc_server.o 00:03:38.224 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:38.224 CC lib/jsonrpc/jsonrpc_client.o 00:03:38.224 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:38.483 LIB libspdk_rdma_provider.a 00:03:38.483 LIB libspdk_jsonrpc.a 00:03:38.741 LIB libspdk_env_dpdk.a 00:03:38.998 CC lib/rpc/rpc.o 00:03:38.998 LIB libspdk_rpc.a 00:03:39.257 CC lib/trace/trace_flags.o 00:03:39.257 CC lib/trace/trace.o 00:03:39.257 CC lib/trace/trace_rpc.o 00:03:39.257 CC lib/notify/notify.o 00:03:39.257 CC lib/notify/notify_rpc.o 00:03:39.257 CC lib/keyring/keyring.o 00:03:39.515 CC lib/keyring/keyring_rpc.o 00:03:39.515 LIB libspdk_trace.a 00:03:39.515 LIB libspdk_notify.a 00:03:39.515 LIB libspdk_keyring.a 00:03:39.773 CC lib/thread/thread.o 00:03:39.773 CC lib/sock/sock.o 00:03:39.774 CC lib/thread/iobuf.o 00:03:39.774 CC lib/sock/sock_rpc.o 00:03:40.034 LIB libspdk_sock.a 00:03:40.296 CC lib/nvme/nvme_fabric.o 00:03:40.296 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:40.296 CC lib/nvme/nvme_ctrlr.o 00:03:40.296 CC lib/nvme/nvme_ns_cmd.o 00:03:40.296 CC lib/nvme/nvme_ns.o 00:03:40.296 CC lib/nvme/nvme_pcie_common.o 00:03:40.296 CC lib/nvme/nvme.o 00:03:40.296 CC lib/nvme/nvme_pcie.o 00:03:40.296 CC lib/nvme/nvme_qpair.o 00:03:40.296 CC lib/nvme/nvme_quirks.o 00:03:40.296 CC lib/nvme/nvme_transport.o 00:03:40.296 CC 
lib/nvme/nvme_discovery.o 00:03:40.296 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:40.296 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:40.296 CC lib/nvme/nvme_poll_group.o 00:03:40.296 CC lib/nvme/nvme_tcp.o 00:03:40.296 CC lib/nvme/nvme_opal.o 00:03:40.296 CC lib/nvme/nvme_io_msg.o 00:03:40.296 CC lib/nvme/nvme_auth.o 00:03:40.296 CC lib/nvme/nvme_stubs.o 00:03:40.296 CC lib/nvme/nvme_zns.o 00:03:40.296 CC lib/nvme/nvme_vfio_user.o 00:03:40.296 CC lib/nvme/nvme_cuse.o 00:03:40.296 CC lib/nvme/nvme_rdma.o 00:03:40.555 LIB libspdk_thread.a 00:03:40.813 CC lib/init/json_config.o 00:03:40.813 CC lib/init/subsystem.o 00:03:40.813 CC lib/init/subsystem_rpc.o 00:03:40.813 CC lib/init/rpc.o 00:03:40.813 CC lib/blob/blobstore.o 00:03:40.813 CC lib/accel/accel.o 00:03:40.814 CC lib/blob/request.o 00:03:40.814 CC lib/accel/accel_rpc.o 00:03:40.814 CC lib/blob/zeroes.o 00:03:40.814 CC lib/accel/accel_sw.o 00:03:40.814 CC lib/blob/blob_bs_dev.o 00:03:40.814 CC lib/fsdev/fsdev.o 00:03:40.814 CC lib/vfu_tgt/tgt_endpoint.o 00:03:40.814 CC lib/vfu_tgt/tgt_rpc.o 00:03:40.814 CC lib/fsdev/fsdev_rpc.o 00:03:40.814 CC lib/fsdev/fsdev_io.o 00:03:40.814 CC lib/virtio/virtio.o 00:03:40.814 CC lib/virtio/virtio_vhost_user.o 00:03:40.814 CC lib/virtio/virtio_pci.o 00:03:40.814 CC lib/virtio/virtio_vfio_user.o 00:03:41.073 LIB libspdk_init.a 00:03:41.073 LIB libspdk_vfu_tgt.a 00:03:41.073 LIB libspdk_virtio.a 00:03:41.332 LIB libspdk_fsdev.a 00:03:41.332 CC lib/event/app.o 00:03:41.332 CC lib/event/app_rpc.o 00:03:41.332 CC lib/event/reactor.o 00:03:41.332 CC lib/event/scheduler_static.o 00:03:41.332 CC lib/event/log_rpc.o 00:03:41.591 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:41.591 LIB libspdk_event.a 00:03:41.591 LIB libspdk_accel.a 00:03:41.851 LIB libspdk_nvme.a 00:03:41.851 LIB libspdk_fuse_dispatcher.a 00:03:42.110 CC lib/bdev/bdev.o 00:03:42.110 CC lib/bdev/part.o 00:03:42.110 CC lib/bdev/bdev_rpc.o 00:03:42.110 CC lib/bdev/scsi_nvme.o 00:03:42.110 CC lib/bdev/bdev_zone.o 00:03:42.679 LIB libspdk_blob.a 00:03:42.939 CC lib/lvol/lvol.o 00:03:42.939 CC lib/blobfs/blobfs.o 00:03:42.939 CC lib/blobfs/tree.o 00:03:43.507 LIB libspdk_lvol.a 00:03:43.507 LIB libspdk_blobfs.a 00:03:43.766 LIB libspdk_bdev.a 00:03:44.025 CC lib/ublk/ublk.o 00:03:44.025 CC lib/ublk/ublk_rpc.o 00:03:44.025 CC lib/nbd/nbd_rpc.o 00:03:44.025 CC lib/nbd/nbd.o 00:03:44.025 CC lib/nvmf/ctrlr_bdev.o 00:03:44.025 CC lib/nvmf/ctrlr.o 00:03:44.025 CC lib/nvmf/ctrlr_discovery.o 00:03:44.025 CC lib/nvmf/subsystem.o 00:03:44.025 CC lib/nvmf/nvmf_rpc.o 00:03:44.025 CC lib/nvmf/nvmf.o 00:03:44.025 CC lib/nvmf/tcp.o 00:03:44.025 CC lib/nvmf/transport.o 00:03:44.025 CC lib/nvmf/stubs.o 00:03:44.025 CC lib/ftl/ftl_core.o 00:03:44.025 CC lib/nvmf/mdns_server.o 00:03:44.025 CC lib/ftl/ftl_init.o 00:03:44.025 CC lib/nvmf/vfio_user.o 00:03:44.025 CC lib/ftl/ftl_layout.o 00:03:44.025 CC lib/scsi/dev.o 00:03:44.025 CC lib/nvmf/auth.o 00:03:44.025 CC lib/nvmf/rdma.o 00:03:44.025 CC lib/ftl/ftl_debug.o 00:03:44.025 CC lib/ftl/ftl_io.o 00:03:44.025 CC lib/scsi/lun.o 00:03:44.025 CC lib/ftl/ftl_sb.o 00:03:44.025 CC lib/scsi/port.o 00:03:44.025 CC lib/ftl/ftl_l2p.o 00:03:44.025 CC lib/scsi/scsi.o 00:03:44.025 CC lib/scsi/scsi_bdev.o 00:03:44.025 CC lib/ftl/ftl_l2p_flat.o 00:03:44.025 CC lib/ftl/ftl_nv_cache.o 00:03:44.025 CC lib/scsi/scsi_pr.o 00:03:44.025 CC lib/ftl/ftl_band.o 00:03:44.025 CC lib/scsi/scsi_rpc.o 00:03:44.025 CC lib/ftl/ftl_band_ops.o 00:03:44.025 CC lib/scsi/task.o 00:03:44.025 CC lib/ftl/ftl_writer.o 00:03:44.025 CC lib/ftl/ftl_rq.o 00:03:44.025 CC 
lib/ftl/ftl_p2l.o 00:03:44.025 CC lib/ftl/ftl_reloc.o 00:03:44.025 CC lib/ftl/ftl_l2p_cache.o 00:03:44.025 CC lib/ftl/ftl_p2l_log.o 00:03:44.025 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:44.025 CC lib/ftl/mngt/ftl_mngt.o 00:03:44.025 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:44.025 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:44.025 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:44.025 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:44.285 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:44.285 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:44.285 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:44.285 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:44.285 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:44.285 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:44.285 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:44.285 CC lib/ftl/utils/ftl_conf.o 00:03:44.285 CC lib/ftl/utils/ftl_md.o 00:03:44.285 CC lib/ftl/utils/ftl_mempool.o 00:03:44.285 CC lib/ftl/utils/ftl_bitmap.o 00:03:44.285 CC lib/ftl/utils/ftl_property.o 00:03:44.285 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:44.285 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:44.285 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:44.285 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:44.285 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:44.285 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:44.285 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:44.285 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:44.285 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:44.285 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:44.285 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:44.285 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:44.285 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:44.285 CC lib/ftl/base/ftl_base_dev.o 00:03:44.285 CC lib/ftl/base/ftl_base_bdev.o 00:03:44.285 CC lib/ftl/ftl_trace.o 00:03:44.543 LIB libspdk_nbd.a 00:03:44.801 LIB libspdk_scsi.a 00:03:44.801 LIB libspdk_ublk.a 00:03:44.801 LIB libspdk_ftl.a 00:03:45.059 CC lib/iscsi/conn.o 00:03:45.059 CC lib/iscsi/init_grp.o 00:03:45.059 CC lib/iscsi/portal_grp.o 00:03:45.059 CC lib/iscsi/iscsi.o 00:03:45.059 CC lib/vhost/vhost.o 00:03:45.059 CC lib/iscsi/param.o 00:03:45.059 CC lib/iscsi/iscsi_subsystem.o 00:03:45.059 CC lib/iscsi/tgt_node.o 00:03:45.059 CC lib/vhost/vhost_rpc.o 00:03:45.059 CC lib/iscsi/iscsi_rpc.o 00:03:45.059 CC lib/vhost/vhost_scsi.o 00:03:45.059 CC lib/iscsi/task.o 00:03:45.059 CC lib/vhost/vhost_blk.o 00:03:45.059 CC lib/vhost/rte_vhost_user.o 00:03:45.626 LIB libspdk_nvmf.a 00:03:45.626 LIB libspdk_vhost.a 00:03:45.885 LIB libspdk_iscsi.a 00:03:46.451 CC module/vfu_device/vfu_virtio.o 00:03:46.451 CC module/vfu_device/vfu_virtio_blk.o 00:03:46.451 CC module/vfu_device/vfu_virtio_fs.o 00:03:46.451 CC module/vfu_device/vfu_virtio_rpc.o 00:03:46.451 CC module/vfu_device/vfu_virtio_scsi.o 00:03:46.451 CC module/env_dpdk/env_dpdk_rpc.o 00:03:46.451 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:46.451 CC module/keyring/file/keyring.o 00:03:46.451 CC module/keyring/file/keyring_rpc.o 00:03:46.451 LIB libspdk_env_dpdk_rpc.a 00:03:46.451 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:46.451 CC module/accel/iaa/accel_iaa_rpc.o 00:03:46.451 CC module/accel/iaa/accel_iaa.o 00:03:46.451 CC module/accel/dsa/accel_dsa.o 00:03:46.451 CC module/accel/dsa/accel_dsa_rpc.o 00:03:46.451 CC module/keyring/linux/keyring.o 00:03:46.451 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:46.451 CC module/fsdev/aio/linux_aio_mgr.o 00:03:46.451 CC module/fsdev/aio/fsdev_aio.o 00:03:46.451 CC module/keyring/linux/keyring_rpc.o 00:03:46.451 CC module/accel/error/accel_error_rpc.o 00:03:46.451 CC module/accel/error/accel_error.o 00:03:46.451 CC 
module/sock/posix/posix.o 00:03:46.451 CC module/scheduler/gscheduler/gscheduler.o 00:03:46.451 CC module/accel/ioat/accel_ioat.o 00:03:46.451 CC module/accel/ioat/accel_ioat_rpc.o 00:03:46.451 CC module/blob/bdev/blob_bdev.o 00:03:46.451 LIB libspdk_keyring_file.a 00:03:46.451 LIB libspdk_scheduler_dpdk_governor.a 00:03:46.451 LIB libspdk_keyring_linux.a 00:03:46.451 LIB libspdk_scheduler_dynamic.a 00:03:46.451 LIB libspdk_scheduler_gscheduler.a 00:03:46.709 LIB libspdk_accel_iaa.a 00:03:46.709 LIB libspdk_accel_error.a 00:03:46.709 LIB libspdk_accel_ioat.a 00:03:46.709 LIB libspdk_blob_bdev.a 00:03:46.709 LIB libspdk_accel_dsa.a 00:03:46.709 LIB libspdk_vfu_device.a 00:03:46.967 LIB libspdk_sock_posix.a 00:03:46.967 LIB libspdk_fsdev_aio.a 00:03:46.967 CC module/bdev/aio/bdev_aio.o 00:03:46.967 CC module/bdev/aio/bdev_aio_rpc.o 00:03:46.967 CC module/bdev/raid/bdev_raid.o 00:03:46.967 CC module/bdev/raid/bdev_raid_rpc.o 00:03:46.967 CC module/bdev/raid/raid0.o 00:03:46.967 CC module/bdev/raid/bdev_raid_sb.o 00:03:46.967 CC module/bdev/raid/raid1.o 00:03:46.967 CC module/bdev/raid/concat.o 00:03:46.967 CC module/bdev/error/vbdev_error_rpc.o 00:03:46.967 CC module/bdev/error/vbdev_error.o 00:03:46.967 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:46.968 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:46.968 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:46.968 CC module/bdev/delay/vbdev_delay.o 00:03:46.968 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:46.968 CC module/bdev/passthru/vbdev_passthru.o 00:03:46.968 CC module/bdev/malloc/bdev_malloc.o 00:03:46.968 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:46.968 CC module/bdev/ftl/bdev_ftl.o 00:03:46.968 CC module/bdev/null/bdev_null_rpc.o 00:03:46.968 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:46.968 CC module/bdev/gpt/gpt.o 00:03:46.968 CC module/bdev/null/bdev_null.o 00:03:46.968 CC module/bdev/gpt/vbdev_gpt.o 00:03:46.968 CC module/bdev/iscsi/bdev_iscsi.o 00:03:46.968 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:46.968 CC module/bdev/lvol/vbdev_lvol.o 00:03:46.968 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:46.968 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:46.968 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:46.968 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:46.968 CC module/bdev/split/vbdev_split.o 00:03:46.968 CC module/bdev/nvme/bdev_nvme.o 00:03:46.968 CC module/bdev/split/vbdev_split_rpc.o 00:03:46.968 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:46.968 CC module/bdev/nvme/nvme_rpc.o 00:03:46.968 CC module/bdev/nvme/bdev_mdns_client.o 00:03:46.968 CC module/bdev/nvme/vbdev_opal.o 00:03:46.968 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:46.968 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:47.226 CC module/blobfs/bdev/blobfs_bdev.o 00:03:47.226 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:47.226 LIB libspdk_bdev_error.a 00:03:47.226 LIB libspdk_bdev_null.a 00:03:47.226 LIB libspdk_bdev_aio.a 00:03:47.226 LIB libspdk_bdev_split.a 00:03:47.226 LIB libspdk_bdev_passthru.a 00:03:47.226 LIB libspdk_bdev_zone_block.a 00:03:47.226 LIB libspdk_bdev_gpt.a 00:03:47.226 LIB libspdk_bdev_ftl.a 00:03:47.226 LIB libspdk_bdev_iscsi.a 00:03:47.226 LIB libspdk_bdev_delay.a 00:03:47.226 LIB libspdk_blobfs_bdev.a 00:03:47.485 LIB libspdk_bdev_malloc.a 00:03:47.485 LIB libspdk_bdev_lvol.a 00:03:47.485 LIB libspdk_bdev_virtio.a 00:03:47.745 LIB libspdk_bdev_raid.a 00:03:48.681 LIB libspdk_bdev_nvme.a 00:03:49.246 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:49.246 CC module/event/subsystems/sock/sock.o 00:03:49.246 CC 
module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:49.246 CC module/event/subsystems/keyring/keyring.o 00:03:49.246 CC module/event/subsystems/fsdev/fsdev.o 00:03:49.246 CC module/event/subsystems/scheduler/scheduler.o 00:03:49.246 CC module/event/subsystems/iobuf/iobuf.o 00:03:49.246 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:49.246 CC module/event/subsystems/vmd/vmd.o 00:03:49.246 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:49.246 LIB libspdk_event_vfu_tgt.a 00:03:49.246 LIB libspdk_event_vhost_blk.a 00:03:49.246 LIB libspdk_event_sock.a 00:03:49.246 LIB libspdk_event_keyring.a 00:03:49.246 LIB libspdk_event_scheduler.a 00:03:49.246 LIB libspdk_event_vmd.a 00:03:49.504 LIB libspdk_event_fsdev.a 00:03:49.504 LIB libspdk_event_iobuf.a 00:03:49.778 CC module/event/subsystems/accel/accel.o 00:03:49.778 LIB libspdk_event_accel.a 00:03:50.058 CC module/event/subsystems/bdev/bdev.o 00:03:50.340 LIB libspdk_event_bdev.a 00:03:50.629 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:50.629 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:50.629 CC module/event/subsystems/nbd/nbd.o 00:03:50.629 CC module/event/subsystems/scsi/scsi.o 00:03:50.629 CC module/event/subsystems/ublk/ublk.o 00:03:50.629 LIB libspdk_event_nbd.a 00:03:50.629 LIB libspdk_event_ublk.a 00:03:50.629 LIB libspdk_event_scsi.a 00:03:50.629 LIB libspdk_event_nvmf.a 00:03:50.932 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:51.192 CC module/event/subsystems/iscsi/iscsi.o 00:03:51.192 LIB libspdk_event_vhost_scsi.a 00:03:51.192 LIB libspdk_event_iscsi.a 00:03:51.450 CC app/spdk_lspci/spdk_lspci.o 00:03:51.450 CXX app/trace/trace.o 00:03:51.450 CC app/spdk_top/spdk_top.o 00:03:51.450 CC test/rpc_client/rpc_client_test.o 00:03:51.450 CC app/trace_record/trace_record.o 00:03:51.450 CC app/spdk_nvme_identify/identify.o 00:03:51.450 TEST_HEADER include/spdk/assert.h 00:03:51.450 TEST_HEADER include/spdk/base64.h 00:03:51.450 TEST_HEADER include/spdk/accel_module.h 00:03:51.450 TEST_HEADER include/spdk/barrier.h 00:03:51.450 TEST_HEADER include/spdk/accel.h 00:03:51.450 TEST_HEADER include/spdk/bdev_module.h 00:03:51.450 TEST_HEADER include/spdk/bit_array.h 00:03:51.450 CC app/spdk_nvme_perf/perf.o 00:03:51.450 TEST_HEADER include/spdk/bdev.h 00:03:51.450 TEST_HEADER include/spdk/bdev_zone.h 00:03:51.450 TEST_HEADER include/spdk/bit_pool.h 00:03:51.450 TEST_HEADER include/spdk/blob_bdev.h 00:03:51.450 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:51.450 TEST_HEADER include/spdk/blobfs.h 00:03:51.450 TEST_HEADER include/spdk/conf.h 00:03:51.450 TEST_HEADER include/spdk/blob.h 00:03:51.450 TEST_HEADER include/spdk/config.h 00:03:51.450 TEST_HEADER include/spdk/cpuset.h 00:03:51.450 TEST_HEADER include/spdk/crc16.h 00:03:51.450 TEST_HEADER include/spdk/crc32.h 00:03:51.450 CC app/spdk_nvme_discover/discovery_aer.o 00:03:51.450 TEST_HEADER include/spdk/crc64.h 00:03:51.450 TEST_HEADER include/spdk/dma.h 00:03:51.450 TEST_HEADER include/spdk/endian.h 00:03:51.450 TEST_HEADER include/spdk/env_dpdk.h 00:03:51.450 TEST_HEADER include/spdk/dif.h 00:03:51.450 TEST_HEADER include/spdk/env.h 00:03:51.450 TEST_HEADER include/spdk/event.h 00:03:51.450 TEST_HEADER include/spdk/fd.h 00:03:51.450 TEST_HEADER include/spdk/file.h 00:03:51.450 TEST_HEADER include/spdk/fd_group.h 00:03:51.450 TEST_HEADER include/spdk/fsdev.h 00:03:51.450 TEST_HEADER include/spdk/fsdev_module.h 00:03:51.450 TEST_HEADER include/spdk/ftl.h 00:03:51.450 TEST_HEADER include/spdk/gpt_spec.h 00:03:51.450 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:51.450 
TEST_HEADER include/spdk/idxd_spec.h 00:03:51.450 TEST_HEADER include/spdk/idxd.h 00:03:51.450 TEST_HEADER include/spdk/hexlify.h 00:03:51.450 TEST_HEADER include/spdk/histogram_data.h 00:03:51.450 TEST_HEADER include/spdk/init.h 00:03:51.450 TEST_HEADER include/spdk/ioat.h 00:03:51.450 TEST_HEADER include/spdk/ioat_spec.h 00:03:51.450 TEST_HEADER include/spdk/iscsi_spec.h 00:03:51.450 TEST_HEADER include/spdk/jsonrpc.h 00:03:51.450 TEST_HEADER include/spdk/keyring.h 00:03:51.450 TEST_HEADER include/spdk/keyring_module.h 00:03:51.450 TEST_HEADER include/spdk/likely.h 00:03:51.450 TEST_HEADER include/spdk/json.h 00:03:51.450 TEST_HEADER include/spdk/log.h 00:03:51.450 TEST_HEADER include/spdk/lvol.h 00:03:51.450 TEST_HEADER include/spdk/md5.h 00:03:51.450 TEST_HEADER include/spdk/memory.h 00:03:51.450 TEST_HEADER include/spdk/nbd.h 00:03:51.451 TEST_HEADER include/spdk/mmio.h 00:03:51.451 TEST_HEADER include/spdk/net.h 00:03:51.451 TEST_HEADER include/spdk/notify.h 00:03:51.451 TEST_HEADER include/spdk/nvme.h 00:03:51.451 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:51.451 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:51.451 TEST_HEADER include/spdk/nvme_spec.h 00:03:51.451 TEST_HEADER include/spdk/nvme_intel.h 00:03:51.451 TEST_HEADER include/spdk/nvme_zns.h 00:03:51.451 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:51.451 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:51.451 TEST_HEADER include/spdk/nvmf.h 00:03:51.451 TEST_HEADER include/spdk/nvmf_spec.h 00:03:51.451 TEST_HEADER include/spdk/nvmf_transport.h 00:03:51.451 TEST_HEADER include/spdk/opal_spec.h 00:03:51.451 TEST_HEADER include/spdk/pci_ids.h 00:03:51.451 TEST_HEADER include/spdk/opal.h 00:03:51.451 TEST_HEADER include/spdk/pipe.h 00:03:51.451 TEST_HEADER include/spdk/reduce.h 00:03:51.451 TEST_HEADER include/spdk/queue.h 00:03:51.451 TEST_HEADER include/spdk/rpc.h 00:03:51.451 TEST_HEADER include/spdk/scheduler.h 00:03:51.451 TEST_HEADER include/spdk/scsi.h 00:03:51.451 TEST_HEADER include/spdk/scsi_spec.h 00:03:51.451 TEST_HEADER include/spdk/sock.h 00:03:51.451 TEST_HEADER include/spdk/stdinc.h 00:03:51.451 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:51.451 TEST_HEADER include/spdk/string.h 00:03:51.451 TEST_HEADER include/spdk/thread.h 00:03:51.451 TEST_HEADER include/spdk/trace.h 00:03:51.451 TEST_HEADER include/spdk/trace_parser.h 00:03:51.451 TEST_HEADER include/spdk/tree.h 00:03:51.451 TEST_HEADER include/spdk/ublk.h 00:03:51.451 TEST_HEADER include/spdk/util.h 00:03:51.451 TEST_HEADER include/spdk/uuid.h 00:03:51.451 TEST_HEADER include/spdk/version.h 00:03:51.451 CC app/nvmf_tgt/nvmf_main.o 00:03:51.451 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:51.451 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:51.711 TEST_HEADER include/spdk/vhost.h 00:03:51.711 TEST_HEADER include/spdk/vmd.h 00:03:51.711 TEST_HEADER include/spdk/xor.h 00:03:51.711 TEST_HEADER include/spdk/zipf.h 00:03:51.711 CXX test/cpp_headers/accel.o 00:03:51.711 CXX test/cpp_headers/accel_module.o 00:03:51.711 CXX test/cpp_headers/assert.o 00:03:51.711 CXX test/cpp_headers/barrier.o 00:03:51.711 CXX test/cpp_headers/base64.o 00:03:51.711 CC app/spdk_dd/spdk_dd.o 00:03:51.711 CXX test/cpp_headers/bdev.o 00:03:51.711 CXX test/cpp_headers/bdev_module.o 00:03:51.711 CXX test/cpp_headers/bit_array.o 00:03:51.711 CXX test/cpp_headers/bdev_zone.o 00:03:51.711 CXX test/cpp_headers/bit_pool.o 00:03:51.711 CXX test/cpp_headers/blob_bdev.o 00:03:51.711 CXX test/cpp_headers/blobfs_bdev.o 00:03:51.711 CXX test/cpp_headers/blobfs.o 00:03:51.711 CXX 
test/cpp_headers/conf.o 00:03:51.711 CXX test/cpp_headers/blob.o 00:03:51.711 CXX test/cpp_headers/config.o 00:03:51.711 CXX test/cpp_headers/cpuset.o 00:03:51.711 CXX test/cpp_headers/crc16.o 00:03:51.711 CXX test/cpp_headers/crc32.o 00:03:51.711 CXX test/cpp_headers/crc64.o 00:03:51.711 CXX test/cpp_headers/dif.o 00:03:51.711 CXX test/cpp_headers/dma.o 00:03:51.711 CXX test/cpp_headers/endian.o 00:03:51.711 CXX test/cpp_headers/env_dpdk.o 00:03:51.711 CXX test/cpp_headers/env.o 00:03:51.711 CXX test/cpp_headers/event.o 00:03:51.711 CXX test/cpp_headers/fd_group.o 00:03:51.711 CXX test/cpp_headers/fd.o 00:03:51.712 CXX test/cpp_headers/file.o 00:03:51.712 CXX test/cpp_headers/fsdev.o 00:03:51.712 CXX test/cpp_headers/fsdev_module.o 00:03:51.712 CXX test/cpp_headers/ftl.o 00:03:51.712 CXX test/cpp_headers/fuse_dispatcher.o 00:03:51.712 CXX test/cpp_headers/gpt_spec.o 00:03:51.712 CXX test/cpp_headers/hexlify.o 00:03:51.712 CXX test/cpp_headers/histogram_data.o 00:03:51.712 CXX test/cpp_headers/idxd.o 00:03:51.712 CXX test/cpp_headers/idxd_spec.o 00:03:51.712 CXX test/cpp_headers/init.o 00:03:51.712 CXX test/cpp_headers/ioat.o 00:03:51.712 CXX test/cpp_headers/ioat_spec.o 00:03:51.712 CC app/iscsi_tgt/iscsi_tgt.o 00:03:51.712 CC test/env/vtophys/vtophys.o 00:03:51.712 CC examples/util/zipf/zipf.o 00:03:51.712 CC app/spdk_tgt/spdk_tgt.o 00:03:51.712 CC test/env/memory/memory_ut.o 00:03:51.712 CC test/thread/poller_perf/poller_perf.o 00:03:51.712 CXX test/cpp_headers/iscsi_spec.o 00:03:51.712 CC test/env/pci/pci_ut.o 00:03:51.712 CC examples/ioat/perf/perf.o 00:03:51.712 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:51.712 CC test/app/jsoncat/jsoncat.o 00:03:51.712 CC test/thread/lock/spdk_lock.o 00:03:51.712 CC test/app/histogram_perf/histogram_perf.o 00:03:51.712 CC test/app/stub/stub.o 00:03:51.712 CC app/fio/nvme/fio_plugin.o 00:03:51.712 CC examples/ioat/verify/verify.o 00:03:51.712 LINK spdk_lspci 00:03:51.712 CC test/dma/test_dma/test_dma.o 00:03:51.712 CC test/app/bdev_svc/bdev_svc.o 00:03:51.712 CC app/fio/bdev/fio_plugin.o 00:03:51.712 CC test/env/mem_callbacks/mem_callbacks.o 00:03:51.712 LINK rpc_client_test 00:03:51.712 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:51.712 LINK spdk_nvme_discover 00:03:51.712 CXX test/cpp_headers/json.o 00:03:51.712 LINK spdk_trace_record 00:03:51.712 CXX test/cpp_headers/jsonrpc.o 00:03:51.712 LINK vtophys 00:03:51.712 CXX test/cpp_headers/keyring.o 00:03:51.712 CXX test/cpp_headers/keyring_module.o 00:03:51.712 CXX test/cpp_headers/likely.o 00:03:51.712 CXX test/cpp_headers/log.o 00:03:51.712 CXX test/cpp_headers/lvol.o 00:03:51.712 CXX test/cpp_headers/md5.o 00:03:51.712 CXX test/cpp_headers/memory.o 00:03:51.712 LINK jsoncat 00:03:51.712 CXX test/cpp_headers/mmio.o 00:03:51.712 CXX test/cpp_headers/nbd.o 00:03:51.712 CXX test/cpp_headers/net.o 00:03:51.712 CXX test/cpp_headers/notify.o 00:03:51.712 CXX test/cpp_headers/nvme.o 00:03:51.712 CXX test/cpp_headers/nvme_intel.o 00:03:51.712 CXX test/cpp_headers/nvme_ocssd.o 00:03:51.712 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:51.712 CXX test/cpp_headers/nvme_spec.o 00:03:51.712 CXX test/cpp_headers/nvme_zns.o 00:03:51.712 CXX test/cpp_headers/nvmf_cmd.o 00:03:51.712 LINK zipf 00:03:51.712 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:51.712 CXX test/cpp_headers/nvmf.o 00:03:51.712 CXX test/cpp_headers/nvmf_spec.o 00:03:51.712 CXX test/cpp_headers/nvmf_transport.o 00:03:51.712 CXX test/cpp_headers/opal.o 00:03:51.712 CXX test/cpp_headers/opal_spec.o 00:03:51.712 CXX 
test/cpp_headers/pci_ids.o 00:03:51.974 LINK poller_perf 00:03:51.974 LINK interrupt_tgt 00:03:51.974 CXX test/cpp_headers/pipe.o 00:03:51.974 LINK histogram_perf 00:03:51.974 CXX test/cpp_headers/queue.o 00:03:51.974 CXX test/cpp_headers/reduce.o 00:03:51.974 CXX test/cpp_headers/rpc.o 00:03:51.974 LINK nvmf_tgt 00:03:51.974 CXX test/cpp_headers/scheduler.o 00:03:51.974 CXX test/cpp_headers/scsi.o 00:03:51.974 LINK env_dpdk_post_init 00:03:51.974 CXX test/cpp_headers/scsi_spec.o 00:03:51.974 CXX test/cpp_headers/sock.o 00:03:51.974 CXX test/cpp_headers/stdinc.o 00:03:51.974 CXX test/cpp_headers/string.o 00:03:51.974 CXX test/cpp_headers/thread.o 00:03:51.974 CXX test/cpp_headers/trace.o 00:03:51.974 LINK stub 00:03:51.974 LINK iscsi_tgt 00:03:51.974 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:51.974 LINK ioat_perf 00:03:51.974 LINK verify 00:03:51.974 LINK spdk_tgt 00:03:51.974 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:51.974 LINK bdev_svc 00:03:51.974 LINK spdk_trace 00:03:51.974 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:03:51.974 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:03:51.974 CXX test/cpp_headers/trace_parser.o 00:03:51.974 CXX test/cpp_headers/tree.o 00:03:51.974 CXX test/cpp_headers/ublk.o 00:03:51.974 CXX test/cpp_headers/util.o 00:03:51.974 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:51.974 CXX test/cpp_headers/uuid.o 00:03:51.974 CXX test/cpp_headers/version.o 00:03:51.974 CXX test/cpp_headers/vfio_user_pci.o 00:03:51.974 CXX test/cpp_headers/vfio_user_spec.o 00:03:51.974 CXX test/cpp_headers/vhost.o 00:03:51.974 CXX test/cpp_headers/vmd.o 00:03:51.974 CXX test/cpp_headers/xor.o 00:03:51.974 CXX test/cpp_headers/zipf.o 00:03:52.234 LINK spdk_dd 00:03:52.234 LINK pci_ut 00:03:52.234 LINK nvme_fuzz 00:03:52.234 LINK test_dma 00:03:52.234 LINK llvm_vfio_fuzz 00:03:52.234 LINK mem_callbacks 00:03:52.234 LINK spdk_nvme 00:03:52.234 LINK spdk_nvme_identify 00:03:52.234 LINK spdk_bdev 00:03:52.492 LINK spdk_top 00:03:52.492 LINK vhost_fuzz 00:03:52.492 CC examples/idxd/perf/perf.o 00:03:52.492 LINK spdk_nvme_perf 00:03:52.492 LINK llvm_nvme_fuzz 00:03:52.492 CC examples/vmd/led/led.o 00:03:52.492 CC examples/sock/hello_world/hello_sock.o 00:03:52.492 CC app/vhost/vhost.o 00:03:52.492 CC examples/vmd/lsvmd/lsvmd.o 00:03:52.492 CC examples/thread/thread/thread_ex.o 00:03:52.751 LINK led 00:03:52.751 LINK lsvmd 00:03:52.751 LINK hello_sock 00:03:52.751 LINK vhost 00:03:52.751 LINK idxd_perf 00:03:52.751 LINK memory_ut 00:03:52.751 LINK thread 00:03:52.751 LINK spdk_lock 00:03:53.321 LINK iscsi_fuzz 00:03:53.321 CC examples/nvme/hotplug/hotplug.o 00:03:53.321 CC examples/nvme/reconnect/reconnect.o 00:03:53.321 CC examples/nvme/arbitration/arbitration.o 00:03:53.321 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:53.321 CC examples/nvme/hello_world/hello_world.o 00:03:53.321 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:53.321 CC examples/nvme/abort/abort.o 00:03:53.580 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:53.580 CC test/event/event_perf/event_perf.o 00:03:53.580 CC test/event/reactor/reactor.o 00:03:53.580 CC test/event/reactor_perf/reactor_perf.o 00:03:53.580 CC test/event/app_repeat/app_repeat.o 00:03:53.580 CC test/event/scheduler/scheduler.o 00:03:53.580 LINK hotplug 00:03:53.580 LINK pmr_persistence 00:03:53.580 LINK cmb_copy 00:03:53.580 LINK hello_world 00:03:53.580 LINK event_perf 00:03:53.580 LINK reconnect 00:03:53.580 LINK reactor 00:03:53.580 LINK reactor_perf 00:03:53.580 LINK arbitration 00:03:53.580 LINK abort 
00:03:53.839 LINK app_repeat 00:03:53.839 LINK nvme_manage 00:03:53.839 LINK scheduler 00:03:54.097 CC test/nvme/fused_ordering/fused_ordering.o 00:03:54.097 CC test/nvme/e2edp/nvme_dp.o 00:03:54.097 CC test/nvme/simple_copy/simple_copy.o 00:03:54.097 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:54.097 CC test/nvme/connect_stress/connect_stress.o 00:03:54.097 CC test/nvme/aer/aer.o 00:03:54.097 CC test/nvme/startup/startup.o 00:03:54.097 CC test/nvme/compliance/nvme_compliance.o 00:03:54.097 CC test/nvme/sgl/sgl.o 00:03:54.097 CC test/nvme/reserve/reserve.o 00:03:54.097 CC test/nvme/err_injection/err_injection.o 00:03:54.097 CC test/nvme/overhead/overhead.o 00:03:54.097 CC test/nvme/fdp/fdp.o 00:03:54.097 CC test/nvme/cuse/cuse.o 00:03:54.097 CC test/nvme/reset/reset.o 00:03:54.097 CC test/nvme/boot_partition/boot_partition.o 00:03:54.097 CC test/blobfs/mkfs/mkfs.o 00:03:54.097 CC test/accel/dif/dif.o 00:03:54.097 CC test/lvol/esnap/esnap.o 00:03:54.097 LINK startup 00:03:54.097 LINK connect_stress 00:03:54.097 LINK fused_ordering 00:03:54.097 LINK doorbell_aers 00:03:54.097 LINK reserve 00:03:54.097 LINK err_injection 00:03:54.097 LINK simple_copy 00:03:54.098 LINK nvme_dp 00:03:54.098 LINK overhead 00:03:54.098 LINK fdp 00:03:54.098 LINK mkfs 00:03:54.356 LINK sgl 00:03:54.356 LINK aer 00:03:54.356 LINK boot_partition 00:03:54.356 LINK reset 00:03:54.356 LINK nvme_compliance 00:03:54.356 CC examples/accel/perf/accel_perf.o 00:03:54.356 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:54.614 CC examples/blob/cli/blobcli.o 00:03:54.614 CC examples/blob/hello_world/hello_blob.o 00:03:54.614 LINK dif 00:03:54.614 LINK hello_fsdev 00:03:54.614 LINK hello_blob 00:03:54.873 LINK accel_perf 00:03:54.873 LINK blobcli 00:03:54.873 LINK cuse 00:03:55.811 CC examples/bdev/hello_world/hello_bdev.o 00:03:55.811 CC examples/bdev/bdevperf/bdevperf.o 00:03:55.811 LINK hello_bdev 00:03:56.070 LINK bdevperf 00:03:56.070 CC test/bdev/bdevio/bdevio.o 00:03:56.328 LINK bdevio 00:03:57.707 LINK esnap 00:03:57.707 CC examples/nvmf/nvmf/nvmf.o 00:03:57.707 LINK nvmf 00:03:59.087 00:03:59.087 real 0m46.806s 00:03:59.087 user 6m58.898s 00:03:59.087 sys 2m21.007s 00:03:59.087 15:36:54 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:59.087 15:36:54 make -- common/autotest_common.sh@10 -- $ set +x 00:03:59.087 ************************************ 00:03:59.087 END TEST make 00:03:59.087 ************************************ 00:03:59.087 15:36:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:59.087 15:36:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:59.087 15:36:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:59.087 15:36:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.087 15:36:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:59.087 15:36:54 -- pm/common@44 -- $ pid=793523 00:03:59.087 15:36:54 -- pm/common@50 -- $ kill -TERM 793523 00:03:59.087 15:36:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.087 15:36:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:59.087 15:36:54 -- pm/common@44 -- $ pid=793525 00:03:59.087 15:36:54 -- pm/common@50 -- $ kill -TERM 793525 00:03:59.087 15:36:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.087 15:36:54 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:59.087 15:36:54 -- pm/common@44 -- $ pid=793528 00:03:59.087 15:36:54 -- pm/common@50 -- $ kill -TERM 793528 00:03:59.087 15:36:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.087 15:36:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:59.087 15:36:54 -- pm/common@44 -- $ pid=793550 00:03:59.087 15:36:54 -- pm/common@50 -- $ sudo -E kill -TERM 793550 00:03:59.087 15:36:54 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:59.087 15:36:54 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:03:59.347 15:36:54 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:59.347 15:36:54 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:59.347 15:36:54 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:59.347 15:36:54 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:59.347 15:36:54 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:59.347 15:36:54 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:59.347 15:36:54 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:59.347 15:36:54 -- scripts/common.sh@336 -- # IFS=.-: 00:03:59.347 15:36:54 -- scripts/common.sh@336 -- # read -ra ver1 00:03:59.347 15:36:54 -- scripts/common.sh@337 -- # IFS=.-: 00:03:59.347 15:36:54 -- scripts/common.sh@337 -- # read -ra ver2 00:03:59.347 15:36:54 -- scripts/common.sh@338 -- # local 'op=<' 00:03:59.347 15:36:54 -- scripts/common.sh@340 -- # ver1_l=2 00:03:59.347 15:36:54 -- scripts/common.sh@341 -- # ver2_l=1 00:03:59.347 15:36:54 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:59.347 15:36:54 -- scripts/common.sh@344 -- # case "$op" in 00:03:59.347 15:36:54 -- scripts/common.sh@345 -- # : 1 00:03:59.347 15:36:54 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:59.347 15:36:54 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:59.347 15:36:54 -- scripts/common.sh@365 -- # decimal 1 00:03:59.347 15:36:54 -- scripts/common.sh@353 -- # local d=1 00:03:59.347 15:36:54 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:59.347 15:36:54 -- scripts/common.sh@355 -- # echo 1 00:03:59.347 15:36:54 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:59.347 15:36:54 -- scripts/common.sh@366 -- # decimal 2 00:03:59.347 15:36:54 -- scripts/common.sh@353 -- # local d=2 00:03:59.347 15:36:54 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:59.347 15:36:54 -- scripts/common.sh@355 -- # echo 2 00:03:59.347 15:36:54 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:59.347 15:36:54 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:59.347 15:36:54 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:59.347 15:36:54 -- scripts/common.sh@368 -- # return 0 00:03:59.347 15:36:54 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:59.347 15:36:54 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:59.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.347 --rc genhtml_branch_coverage=1 00:03:59.347 --rc genhtml_function_coverage=1 00:03:59.347 --rc genhtml_legend=1 00:03:59.347 --rc geninfo_all_blocks=1 00:03:59.347 --rc geninfo_unexecuted_blocks=1 00:03:59.347 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:59.347 ' 00:03:59.347 15:36:54 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:59.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.347 --rc genhtml_branch_coverage=1 00:03:59.347 --rc genhtml_function_coverage=1 00:03:59.347 --rc genhtml_legend=1 00:03:59.347 --rc geninfo_all_blocks=1 00:03:59.347 --rc geninfo_unexecuted_blocks=1 00:03:59.347 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:59.347 ' 00:03:59.347 15:36:54 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:59.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.347 --rc genhtml_branch_coverage=1 00:03:59.347 --rc genhtml_function_coverage=1 00:03:59.347 --rc genhtml_legend=1 00:03:59.347 --rc geninfo_all_blocks=1 00:03:59.347 --rc geninfo_unexecuted_blocks=1 00:03:59.347 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:59.347 ' 00:03:59.347 15:36:54 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:59.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.347 --rc genhtml_branch_coverage=1 00:03:59.347 --rc genhtml_function_coverage=1 00:03:59.347 --rc genhtml_legend=1 00:03:59.347 --rc geninfo_all_blocks=1 00:03:59.347 --rc geninfo_unexecuted_blocks=1 00:03:59.347 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:59.347 ' 00:03:59.347 15:36:54 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:03:59.347 15:36:54 -- nvmf/common.sh@7 -- # uname -s 00:03:59.347 15:36:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:59.347 15:36:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:59.347 15:36:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:59.347 15:36:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:59.347 15:36:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:59.347 15:36:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:59.347 15:36:54 -- nvmf/common.sh@14 -- 
# NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:59.347 15:36:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:59.347 15:36:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:59.347 15:36:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:59.347 15:36:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:03:59.347 15:36:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:03:59.347 15:36:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:59.347 15:36:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:59.347 15:36:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:59.347 15:36:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:59.347 15:36:54 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:03:59.347 15:36:54 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:59.347 15:36:54 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:59.347 15:36:54 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:59.347 15:36:54 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:59.347 15:36:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.347 15:36:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.347 15:36:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.347 15:36:54 -- paths/export.sh@5 -- # export PATH 00:03:59.347 15:36:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.347 15:36:54 -- nvmf/common.sh@51 -- # : 0 00:03:59.347 15:36:54 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:59.347 15:36:54 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:59.347 15:36:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:59.347 15:36:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:59.348 15:36:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:59.348 15:36:54 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:59.348 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:59.348 15:36:54 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:59.348 15:36:54 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:59.348 15:36:54 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:59.348 15:36:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:59.348 15:36:54 -- spdk/autotest.sh@32 -- # uname -s 00:03:59.348 
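The "[: : integer expression expected" message above is nvmf/common.sh line 33 running its numeric test with an empty operand ('[' '' -eq 1 ']' in the trace). A minimal reproduction of that failure mode, using a placeholder variable name rather than the actual SPDK one, plus the usual default-expansion guard:

    # The [ builtin cannot treat the empty string as an integer, so an unset
    # flag makes the comparison itself error out (the script still continues;
    # the test just fails with status 2 and prints the message).
    flag=""
    [ "$flag" -eq 1 ] && echo enabled       # -> "[: : integer expression expected"

    # Guarding with a default keeps the log quiet:
    [ "${flag:-0}" -eq 1 ] && echo enabled  # no error; comparison is simply false
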
15:36:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:59.348 15:36:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:59.348 15:36:54 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:59.348 15:36:54 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:59.348 15:36:54 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:59.348 15:36:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:59.348 15:36:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:59.348 15:36:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:59.348 15:36:54 -- spdk/autotest.sh@48 -- # udevadm_pid=854375 00:03:59.348 15:36:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:59.348 15:36:54 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:59.348 15:36:54 -- pm/common@17 -- # local monitor 00:03:59.348 15:36:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.348 15:36:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.348 15:36:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.348 15:36:54 -- pm/common@21 -- # date +%s 00:03:59.348 15:36:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.348 15:36:54 -- pm/common@21 -- # date +%s 00:03:59.348 15:36:54 -- pm/common@25 -- # sleep 1 00:03:59.348 15:36:54 -- pm/common@21 -- # date +%s 00:03:59.348 15:36:54 -- pm/common@21 -- # date +%s 00:03:59.348 15:36:54 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733755014 00:03:59.348 15:36:54 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733755014 00:03:59.348 15:36:54 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733755014 00:03:59.348 15:36:54 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733755014 00:03:59.348 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733755014_collect-cpu-load.pm.log 00:03:59.348 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733755014_collect-vmstat.pm.log 00:03:59.348 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733755014_collect-cpu-temp.pm.log 00:03:59.348 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733755014_collect-bmc-pm.bmc.pm.log 00:04:00.287 15:36:55 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:00.287 15:36:55 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:00.287 15:36:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.287 15:36:55 -- common/autotest_common.sh@10 -- # set +x 
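autotest.sh above saves the running core_pattern, points core dumps at SPDK's core-collector.sh, and start_monitor_resources then launches the collect-* helpers in the background with a shared `date +%s` stamp so their per-run logs correlate. A rough sketch of that shape — the output directory and the write to /proc/sys/kernel/core_pattern are assumptions here; the trace only shows the echo and the collector invocations:

    # Keep the old handler so it can be restored on cleanup, then install a
    # pipe handler that receives the PID, signal and timestamp of any crash.
    old_core_pattern=$(cat /proc/sys/kernel/core_pattern)
    echo '|/path/to/scripts/core-collector.sh %P %s %t' > /proc/sys/kernel/core_pattern  # needs root

    # Start the resource monitors in parallel; one epoch stamp names all logs.
    stamp=$(date +%s)
    outdir=/path/to/output/power
    for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
        "/path/to/pm/$mon" -d "$outdir" -l -p "monitor.autotest.sh.$stamp" &
    done
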
00:04:00.287 15:36:55 -- spdk/autotest.sh@59 -- # create_test_list 00:04:00.287 15:36:55 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:00.287 15:36:55 -- common/autotest_common.sh@10 -- # set +x 00:04:00.547 15:36:55 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:04:00.547 15:36:55 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:04:00.547 15:36:55 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:04:00.547 15:36:55 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:04:00.547 15:36:55 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:04:00.547 15:36:55 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:00.547 15:36:55 -- common/autotest_common.sh@1457 -- # uname 00:04:00.547 15:36:55 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:00.547 15:36:55 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:00.547 15:36:55 -- common/autotest_common.sh@1477 -- # uname 00:04:00.547 15:36:55 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:00.547 15:36:55 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:00.547 15:36:55 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:04:00.547 lcov: LCOV version 1.15 00:04:00.547 15:36:55 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:13.710 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcno 00:04:16.247 15:37:11 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:16.247 15:37:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:16.247 15:37:11 -- common/autotest_common.sh@10 -- # set +x 00:04:16.247 15:37:11 -- spdk/autotest.sh@78 -- # rm -f 00:04:16.247 15:37:11 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:18.785 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:18.785 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:18.785 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:18.785 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:18.785 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:18.785 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:18.785 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:19.044 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:19.044 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:19.044 
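The lcov invocation above captures an initial "Baseline" tracefile with zero counts for every instrumented object (hence the geninfo warnings for .gcno files that have produced no data yet). Sketched below is the usual baseline-then-merge pattern this sets up; the post-test capture and the merge are not part of this excerpt, and the paths are placeholders:

    # Record zero counts before the tests so files never executed still
    # appear at 0% once the post-test capture is merged in.
    GCOV_TOOL=/path/to/llvm-gcov.sh    # wrapper so lcov drives llvm-cov gcov
    lcov --gcov-tool "$GCOV_TOOL" -q -c --no-external -i \
         -t Baseline -d /path/to/spdk -o cov_base.info

    # ... run the test suites ...

    lcov --gcov-tool "$GCOV_TOOL" -q -c --no-external \
         -t Tests -d /path/to/spdk -o cov_test.info
    lcov -a cov_base.info -a cov_test.info -o cov_total.info   # merge baseline + results
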
0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:19.044 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:19.044 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:19.044 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:19.044 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:19.044 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:19.044 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:19.044 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:19.304 15:37:14 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:19.304 15:37:14 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:19.304 15:37:14 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:19.304 15:37:14 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:19.304 15:37:14 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:19.304 15:37:14 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:19.304 15:37:14 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:19.304 15:37:14 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:04:19.304 15:37:14 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:19.304 15:37:14 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:19.304 15:37:14 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:19.304 15:37:14 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:19.304 15:37:14 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:19.304 15:37:14 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:19.304 15:37:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:19.304 15:37:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:19.304 15:37:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:19.304 15:37:14 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:19.304 15:37:14 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:19.304 No valid GPT data, bailing 00:04:19.304 15:37:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:19.304 15:37:14 -- scripts/common.sh@394 -- # pt= 00:04:19.304 15:37:14 -- scripts/common.sh@395 -- # return 1 00:04:19.304 15:37:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:19.304 1+0 records in 00:04:19.304 1+0 records out 00:04:19.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00667549 s, 157 MB/s 00:04:19.304 15:37:14 -- spdk/autotest.sh@105 -- # sync 00:04:19.304 15:37:14 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:19.304 15:37:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:19.304 15:37:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:23.502 15:37:18 -- spdk/autotest.sh@111 -- # uname -s 00:04:23.502 15:37:18 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:23.502 15:37:18 -- spdk/autotest.sh@111 -- # [[ 1 -eq 1 ]] 00:04:23.502 15:37:18 -- spdk/autotest.sh@112 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:04:23.502 15:37:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.502 15:37:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.502 15:37:18 -- common/autotest_common.sh@10 -- # set +x 00:04:23.502 ************************************ 
00:04:23.502 START TEST setup.sh 00:04:23.502 ************************************ 00:04:23.502 15:37:18 setup.sh -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:04:23.502 * Looking for test storage... 00:04:23.502 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:23.503 15:37:18 setup.sh -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:23.503 15:37:18 setup.sh -- common/autotest_common.sh@1711 -- # lcov --version 00:04:23.503 15:37:18 setup.sh -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:23.503 15:37:18 setup.sh -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@344 -- # case "$op" in 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@345 -- # : 1 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@365 -- # decimal 1 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@353 -- # local d=1 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@355 -- # echo 1 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@366 -- # decimal 2 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@353 -- # local d=2 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@355 -- # echo 2 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.503 15:37:18 setup.sh -- scripts/common.sh@368 -- # return 0 00:04:23.503 15:37:18 setup.sh -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.503 15:37:18 setup.sh -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:23.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.503 --rc genhtml_branch_coverage=1 00:04:23.503 --rc genhtml_function_coverage=1 00:04:23.503 --rc genhtml_legend=1 00:04:23.503 --rc geninfo_all_blocks=1 00:04:23.503 --rc geninfo_unexecuted_blocks=1 00:04:23.503 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:23.503 ' 00:04:23.503 15:37:18 setup.sh -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:23.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.503 --rc genhtml_branch_coverage=1 00:04:23.503 --rc genhtml_function_coverage=1 00:04:23.503 --rc genhtml_legend=1 00:04:23.503 --rc geninfo_all_blocks=1 00:04:23.503 --rc geninfo_unexecuted_blocks=1 00:04:23.503 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:23.503 ' 00:04:23.503 15:37:18 setup.sh -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:23.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.503 --rc genhtml_branch_coverage=1 00:04:23.503 --rc genhtml_function_coverage=1 00:04:23.503 --rc genhtml_legend=1 00:04:23.503 --rc geninfo_all_blocks=1 00:04:23.503 --rc geninfo_unexecuted_blocks=1 00:04:23.503 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:23.503 ' 00:04:23.503 15:37:18 setup.sh -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:23.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.503 --rc genhtml_branch_coverage=1 00:04:23.503 --rc genhtml_function_coverage=1 00:04:23.503 --rc genhtml_legend=1 00:04:23.503 --rc geninfo_all_blocks=1 00:04:23.503 --rc geninfo_unexecuted_blocks=1 00:04:23.503 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:23.503 ' 00:04:23.503 15:37:18 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:23.503 15:37:18 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:23.503 15:37:18 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:04:23.503 15:37:18 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.503 15:37:18 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.503 
15:37:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:23.503 ************************************ 00:04:23.503 START TEST acl 00:04:23.503 ************************************ 00:04:23.503 15:37:18 setup.sh.acl -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:04:23.503 * Looking for test storage... 00:04:23.763 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:23.763 15:37:18 setup.sh.acl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:23.763 15:37:18 setup.sh.acl -- common/autotest_common.sh@1711 -- # lcov --version 00:04:23.763 15:37:18 setup.sh.acl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:23.763 15:37:18 setup.sh.acl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:23.763 15:37:18 setup.sh.acl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.763 15:37:18 setup.sh.acl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.763 15:37:18 setup.sh.acl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.763 15:37:18 setup.sh.acl -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.763 15:37:18 setup.sh.acl -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.763 15:37:18 setup.sh.acl -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.763 15:37:18 setup.sh.acl -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.763 15:37:18 setup.sh.acl -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.763 15:37:18 setup.sh.acl -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.763 15:37:18 setup.sh.acl -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.763 15:37:18 setup.sh.acl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.763 15:37:18 setup.sh.acl -- scripts/common.sh@344 -- # case "$op" in 00:04:23.763 15:37:18 setup.sh.acl -- scripts/common.sh@345 -- # : 1 00:04:23.763 15:37:18 setup.sh.acl -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.763 15:37:18 setup.sh.acl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:23.763 15:37:18 setup.sh.acl -- scripts/common.sh@365 -- # decimal 1 00:04:23.763 15:37:18 setup.sh.acl -- scripts/common.sh@353 -- # local d=1 00:04:23.763 15:37:18 setup.sh.acl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.763 15:37:18 setup.sh.acl -- scripts/common.sh@355 -- # echo 1 00:04:23.763 15:37:18 setup.sh.acl -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.764 15:37:18 setup.sh.acl -- scripts/common.sh@366 -- # decimal 2 00:04:23.764 15:37:18 setup.sh.acl -- scripts/common.sh@353 -- # local d=2 00:04:23.764 15:37:18 setup.sh.acl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.764 15:37:18 setup.sh.acl -- scripts/common.sh@355 -- # echo 2 00:04:23.764 15:37:18 setup.sh.acl -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.764 15:37:18 setup.sh.acl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.764 15:37:18 setup.sh.acl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.764 15:37:18 setup.sh.acl -- scripts/common.sh@368 -- # return 0 00:04:23.764 15:37:18 setup.sh.acl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.764 15:37:18 setup.sh.acl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:23.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.764 --rc genhtml_branch_coverage=1 00:04:23.764 --rc genhtml_function_coverage=1 00:04:23.764 --rc genhtml_legend=1 00:04:23.764 --rc geninfo_all_blocks=1 00:04:23.764 --rc geninfo_unexecuted_blocks=1 00:04:23.764 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:23.764 ' 00:04:23.764 15:37:18 setup.sh.acl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:23.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.764 --rc genhtml_branch_coverage=1 00:04:23.764 --rc genhtml_function_coverage=1 00:04:23.764 --rc genhtml_legend=1 00:04:23.764 --rc geninfo_all_blocks=1 00:04:23.764 --rc geninfo_unexecuted_blocks=1 00:04:23.764 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:23.764 ' 00:04:23.764 15:37:18 setup.sh.acl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:23.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.764 --rc genhtml_branch_coverage=1 00:04:23.764 --rc genhtml_function_coverage=1 00:04:23.764 --rc genhtml_legend=1 00:04:23.764 --rc geninfo_all_blocks=1 00:04:23.764 --rc geninfo_unexecuted_blocks=1 00:04:23.764 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:23.764 ' 00:04:23.764 15:37:18 setup.sh.acl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:23.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.764 --rc genhtml_branch_coverage=1 00:04:23.764 --rc genhtml_function_coverage=1 00:04:23.764 --rc genhtml_legend=1 00:04:23.764 --rc geninfo_all_blocks=1 00:04:23.764 --rc geninfo_unexecuted_blocks=1 00:04:23.764 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:23.764 ' 00:04:23.764 15:37:18 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:23.764 15:37:18 setup.sh.acl -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:23.764 15:37:18 setup.sh.acl -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:23.764 15:37:18 setup.sh.acl -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:23.764 15:37:18 setup.sh.acl -- 
common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:23.764 15:37:18 setup.sh.acl -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:23.764 15:37:18 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:23.764 15:37:18 setup.sh.acl -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:04:23.764 15:37:18 setup.sh.acl -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:23.764 15:37:18 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:23.764 15:37:18 setup.sh.acl -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:23.764 15:37:18 setup.sh.acl -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:23.764 15:37:18 setup.sh.acl -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:23.764 15:37:18 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:23.764 15:37:18 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:23.764 15:37:18 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:23.764 15:37:18 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:23.764 15:37:18 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:23.764 15:37:18 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:23.764 15:37:18 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:27.060 15:37:22 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:27.060 15:37:22 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:27.060 15:37:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.060 15:37:22 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:27.060 15:37:22 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.060 15:37:22 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:04:30.429 Hugepages 00:04:30.429 node hugesize free / total 00:04:30.429 15:37:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:30.429 15:37:25 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:30.429 15:37:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.429 15:37:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:30.429 15:37:25 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:30.429 15:37:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.429 15:37:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:30.429 15:37:25 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:30.429 15:37:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.429 00:04:30.429 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:30.429 15:37:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:30.429 15:37:25 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:30.429 15:37:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.429 15:37:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:30.429 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:30.429 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:30.429 15:37:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.429 15:37:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:30.429 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ 
ioatdma == nvme ]] 00:04:30.429 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:30.429 15:37:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.429 15:37:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:30.430 15:37:25 
setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:30.430 15:37:25 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:30.430 15:37:25 setup.sh.acl -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.430 15:37:25 setup.sh.acl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.430 15:37:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:30.430 ************************************ 00:04:30.430 START TEST denied 00:04:30.430 ************************************ 00:04:30.430 15:37:25 setup.sh.acl.denied -- common/autotest_common.sh@1129 -- # denied 00:04:30.430 15:37:25 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:04:30.430 15:37:25 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:30.430 15:37:25 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:04:30.430 15:37:25 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.430 15:37:25 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:33.737 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:04:33.737 15:37:28 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:04:33.737 15:37:28 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:33.737 15:37:28 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:33.737 15:37:28 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:04:33.737 15:37:28 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:04:33.737 15:37:28 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:33.737 15:37:28 
setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:33.737 15:37:28 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:33.737 15:37:28 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:33.737 15:37:28 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:37.938 00:04:37.938 real 0m7.245s 00:04:37.938 user 0m2.280s 00:04:37.938 sys 0m4.303s 00:04:37.938 15:37:32 setup.sh.acl.denied -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.938 15:37:32 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:37.938 ************************************ 00:04:37.938 END TEST denied 00:04:37.938 ************************************ 00:04:37.938 15:37:32 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:37.938 15:37:32 setup.sh.acl -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.938 15:37:32 setup.sh.acl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.938 15:37:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:37.938 ************************************ 00:04:37.938 START TEST allowed 00:04:37.938 ************************************ 00:04:37.938 15:37:32 setup.sh.acl.allowed -- common/autotest_common.sh@1129 -- # allowed 00:04:37.938 15:37:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:04:37.938 15:37:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:37.938 15:37:32 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:04:37.938 15:37:32 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.938 15:37:32 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:44.517 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:44.517 15:37:38 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:44.517 15:37:38 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:44.517 15:37:38 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:44.517 15:37:38 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:44.517 15:37:38 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:47.061 00:04:47.061 real 0m9.228s 00:04:47.061 user 0m1.957s 00:04:47.061 sys 0m3.963s 00:04:47.061 15:37:41 setup.sh.acl.allowed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.061 15:37:41 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:47.061 ************************************ 00:04:47.061 END TEST allowed 00:04:47.061 ************************************ 00:04:47.061 00:04:47.061 real 0m23.282s 00:04:47.061 user 0m6.765s 00:04:47.061 sys 0m12.797s 00:04:47.061 15:37:41 setup.sh.acl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.061 15:37:41 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:47.061 ************************************ 00:04:47.061 END TEST acl 00:04:47.061 ************************************ 00:04:47.061 15:37:41 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:04:47.061 15:37:41 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.061 15:37:41 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.061 15:37:41 setup.sh 
-- common/autotest_common.sh@10 -- # set +x 00:04:47.061 ************************************ 00:04:47.061 START TEST hugepages 00:04:47.061 ************************************ 00:04:47.061 15:37:41 setup.sh.hugepages -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:04:47.061 * Looking for test storage... 00:04:47.061 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:47.061 15:37:42 setup.sh.hugepages -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:47.061 15:37:42 setup.sh.hugepages -- common/autotest_common.sh@1711 -- # lcov --version 00:04:47.061 15:37:42 setup.sh.hugepages -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:47.061 15:37:42 setup.sh.hugepages -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@344 -- # case "$op" in 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@345 -- # : 1 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@365 -- # decimal 1 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=1 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 1 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@366 -- # decimal 2 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=2 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 2 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.061 15:37:42 setup.sh.hugepages -- scripts/common.sh@368 -- # return 0 00:04:47.061 15:37:42 setup.sh.hugepages -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.061 15:37:42 setup.sh.hugepages -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:47.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.061 --rc genhtml_branch_coverage=1 00:04:47.061 --rc genhtml_function_coverage=1 00:04:47.061 --rc genhtml_legend=1 00:04:47.061 --rc geninfo_all_blocks=1 00:04:47.061 --rc geninfo_unexecuted_blocks=1 00:04:47.061 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:47.061 ' 00:04:47.061 15:37:42 setup.sh.hugepages -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:47.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.061 --rc genhtml_branch_coverage=1 00:04:47.061 --rc genhtml_function_coverage=1 00:04:47.061 --rc genhtml_legend=1 00:04:47.061 --rc geninfo_all_blocks=1 00:04:47.061 --rc geninfo_unexecuted_blocks=1 00:04:47.061 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:47.061 ' 00:04:47.061 15:37:42 setup.sh.hugepages -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:47.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.061 --rc genhtml_branch_coverage=1 00:04:47.061 --rc genhtml_function_coverage=1 00:04:47.061 --rc genhtml_legend=1 00:04:47.061 --rc geninfo_all_blocks=1 00:04:47.061 --rc geninfo_unexecuted_blocks=1 00:04:47.061 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:47.061 ' 00:04:47.061 15:37:42 setup.sh.hugepages -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:47.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.061 --rc genhtml_branch_coverage=1 00:04:47.061 --rc genhtml_function_coverage=1 00:04:47.061 --rc genhtml_legend=1 00:04:47.061 --rc geninfo_all_blocks=1 00:04:47.061 --rc geninfo_unexecuted_blocks=1 00:04:47.061 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:47.061 ' 00:04:47.061 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:47.061 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:47.061 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:47.061 15:37:42 
setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:47.061 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:47.061 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:47.061 15:37:42 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:47.061 15:37:42 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 73850524 kB' 'MemAvailable: 77616924 kB' 'Buffers: 9308 kB' 'Cached: 11790536 kB' 'SwapCached: 0 kB' 'Active: 8625544 kB' 'Inactive: 3709736 kB' 'Active(anon): 8131268 kB' 'Inactive(anon): 0 kB' 'Active(file): 494276 kB' 'Inactive(file): 3709736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538804 kB' 'Mapped: 178552 kB' 'Shmem: 7595832 kB' 'KReclaimable: 398324 kB' 'Slab: 868668 kB' 'SReclaimable: 398324 kB' 'SUnreclaim: 470344 kB' 'KernelStack: 15984 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52434212 kB' 'Committed_AS: 9427508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198704 kB' 'VmallocChunk: 0 kB' 'Percpu: 53856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 441856 kB' 'DirectMap2M: 6574080 kB' 'DirectMap1G: 95420416 kB' 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.062 15:37:42 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r 
var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.062 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.063 
15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.063 
15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 
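(Editor's note, not part of the captured log.) The trace above is the setup/common.sh meminfo scan: it reads /proc/meminfo one "Key: value" pair at a time, skips every key that is not the one requested, and finally echoes 2048 for Hugepagesize, which hugepages.sh stores as default_hugepages. A minimal sketch of that pattern follows; the helper name get_meminfo_sketch is mine, and the body is a reconstruction inferred from the trace, not the SPDK source.

    #!/usr/bin/env bash
    # Hedged sketch of the key-scan loop seen in the trace above.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching keys are skipped, as in the trace
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    default_hugepages=$(get_meminfo_sketch Hugepagesize)   # -> 2048 on this node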
00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:04:47.063 15:37:42 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup 00:04:47.063 15:37:42 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.063 15:37:42 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.064 15:37:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:47.064 ************************************ 00:04:47.064 START TEST single_node_setup 00:04:47.064 ************************************ 00:04:47.064 15:37:42 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1129 -- # single_node_setup 00:04:47.064 15:37:42 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0 00:04:47.064 15:37:42 setup.sh.hugepages.single_node_setup 
-- setup/hugepages.sh@48 -- # local size=2097152 00:04:47.064 15:37:42 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:04:47.064 15:37:42 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift 00:04:47.064 15:37:42 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0') 00:04:47.064 15:37:42 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids 00:04:47.064 15:37:42 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:47.064 15:37:42 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:47.064 15:37:42 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:04:47.064 15:37:42 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:04:47.064 15:37:42 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes 00:04:47.064 15:37:42 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:47.064 15:37:42 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:47.064 15:37:42 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:47.064 15:37:42 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:47.064 15:37:42 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:04:47.064 15:37:42 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:04:47.064 15:37:42 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:04:47.064 15:37:42 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0 00:04:47.064 15:37:42 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024 00:04:47.064 15:37:42 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0 00:04:47.064 15:37:42 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output 00:04:47.064 15:37:42 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.064 15:37:42 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:50.362 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:50.362 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:50.362 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:50.362 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:50.362 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:50.362 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:50.362 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:50.362 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:50.362 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:50.362 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:50.362 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:50.362 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:50.362 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:50.362 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:50.362 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:50.362 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:53.666 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # 
verify_nr_hugepages 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76078084 kB' 'MemAvailable: 79844324 kB' 'Buffers: 9308 kB' 'Cached: 11790668 kB' 'SwapCached: 0 kB' 'Active: 8628344 kB' 'Inactive: 3709736 kB' 'Active(anon): 8134068 kB' 'Inactive(anon): 0 kB' 'Active(file): 494276 kB' 'Inactive(file): 3709736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541436 kB' 'Mapped: 178144 kB' 'Shmem: 7595964 kB' 'KReclaimable: 398164 kB' 'Slab: 867812 kB' 'SReclaimable: 398164 kB' 'SUnreclaim: 469648 kB' 'KernelStack: 16112 kB' 'PageTables: 7928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9431700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198992 kB' 'VmallocChunk: 0 kB' 'Percpu: 53856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 441856 kB' 'DirectMap2M: 6574080 kB' 'DirectMap1G: 95420416 kB' 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
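(Editor's note, not part of the captured log.) Before comparing individual keys, the trace snapshots the whole meminfo table: with no node argument it falls back to /proc/meminfo, while a per-node file under /sys/devices/system/node/ would carry a "Node N " prefix that the script strips with an extglob pattern. The sketch below reconstructs that snapshot step under those assumptions; read_meminfo_snapshot is my name for it.

    #!/usr/bin/env bash
    # Hedged sketch of the snapshot step traced above (mem_f selection, mapfile,
    # "Node N " prefix strip). Reconstruction, not the SPDK source.
    shopt -s extglob

    read_meminfo_snapshot() {
        local node=$1 mem_f=/proc/meminfo mem
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node 0 "
        printf '%s\n' "${mem[@]}"
    }

    read_meminfo_snapshot      # global snapshot, as in the AnonHugePages lookup above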
00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.666 15:37:48 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.666 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 
00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:53.667 15:37:48 
setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76084072 kB' 'MemAvailable: 79850312 kB' 'Buffers: 9308 kB' 'Cached: 11790668 kB' 'SwapCached: 0 kB' 'Active: 8629508 kB' 'Inactive: 3709736 kB' 'Active(anon): 8135232 kB' 'Inactive(anon): 0 kB' 'Active(file): 494276 kB' 'Inactive(file): 3709736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542536 kB' 'Mapped: 178140 kB' 'Shmem: 7595964 kB' 'KReclaimable: 398164 kB' 'Slab: 867860 kB' 'SReclaimable: 398164 kB' 'SUnreclaim: 469696 kB' 'KernelStack: 16320 kB' 'PageTables: 8968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9431348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198944 kB' 'VmallocChunk: 0 kB' 'Percpu: 53856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 441856 kB' 'DirectMap2M: 6574080 kB' 'DirectMap1G: 95420416 kB' 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.667 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # 
continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.668 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.669 15:37:48 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76083680 kB' 'MemAvailable: 79849920 kB' 'Buffers: 9308 kB' 'Cached: 11790692 kB' 'SwapCached: 0 kB' 'Active: 8629188 kB' 'Inactive: 3709736 kB' 'Active(anon): 8134912 kB' 'Inactive(anon): 0 kB' 'Active(file): 494276 kB' 'Inactive(file): 3709736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542208 kB' 'Mapped: 178140 kB' 'Shmem: 7595988 kB' 'KReclaimable: 398164 kB' 'Slab: 867756 kB' 'SReclaimable: 398164 kB' 'SUnreclaim: 469592 kB' 'KernelStack: 16224 kB' 'PageTables: 8588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9431376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198992 kB' 'VmallocChunk: 0 kB' 'Percpu: 53856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 441856 kB' 'DirectMap2M: 6574080 kB' 'DirectMap1G: 95420416 kB' 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.669 15:37:48 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.669 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.670 15:37:48 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.670 15:37:48 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.670 15:37:48 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.670 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.671 15:37:48 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.671 15:37:48 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:53.671 nr_hugepages=1024 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:53.671 resv_hugepages=0 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:53.671 surplus_hugepages=0 00:04:53.671 15:37:48 
setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:53.671 anon_hugepages=0 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.671 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76084140 kB' 'MemAvailable: 79850380 kB' 'Buffers: 9308 kB' 'Cached: 11790712 kB' 'SwapCached: 0 kB' 'Active: 8628224 kB' 'Inactive: 3709736 kB' 'Active(anon): 8133948 kB' 'Inactive(anon): 0 kB' 'Active(file): 494276 kB' 'Inactive(file): 3709736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541220 kB' 'Mapped: 178140 kB' 'Shmem: 7596008 kB' 'KReclaimable: 398164 kB' 'Slab: 867436 kB' 'SReclaimable: 398164 kB' 'SUnreclaim: 469272 kB' 'KernelStack: 16176 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9431528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198976 kB' 'VmallocChunk: 0 kB' 'Percpu: 53856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 441856 kB' 'DirectMap2M: 6574080 kB' 'DirectMap1G: 95420416 kB' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.672 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.673 15:37:48 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- 
setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064936 kB' 'MemFree: 41968856 kB' 'MemUsed: 6096080 kB' 'SwapCached: 0 kB' 'Active: 2985368 kB' 'Inactive: 130664 kB' 'Active(anon): 2726196 kB' 'Inactive(anon): 0 kB' 'Active(file): 259172 kB' 'Inactive(file): 130664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2843844 kB' 'Mapped: 139684 kB' 'AnonPages: 275392 kB' 'Shmem: 2454008 kB' 'KernelStack: 8760 kB' 'PageTables: 4652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107500 kB' 'Slab: 369820 kB' 'SReclaimable: 107500 kB' 'SUnreclaim: 262320 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.673 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.674 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:53.935 node0=1024 expecting 1024 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:53.935 00:04:53.935 real 0m6.633s 00:04:53.935 user 0m1.388s 00:04:53.935 sys 0m2.175s 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.935 15:37:48 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x 00:04:53.935 ************************************ 00:04:53.935 END TEST single_node_setup 00:04:53.935 ************************************ 00:04:53.936 15:37:48 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc 00:04:53.936 15:37:48 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.936 15:37:48 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.936 15:37:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:53.936 ************************************ 00:04:53.936 START TEST even_2G_alloc 00:04:53.936 ************************************ 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1129 -- # even_2G_alloc 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local 
_nr_hugepages=1024 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.936 15:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:57.279 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:57.279 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:57.279 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:57.279 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:57.279 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:57.279 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:57.279 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:57.279 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:57.279 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:57.279 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:57.279 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:57.279 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:57.279 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:57.279 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:57.279 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:57.279 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:57.279 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:57.279 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages 00:04:57.279 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node 00:04:57.279 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:57.279 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:57.279 15:37:51 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:57.279 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:57.279 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:57.279 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:57.279 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:57.279 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:57.279 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:57.279 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:57.279 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.279 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.279 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76100588 kB' 'MemAvailable: 79866812 kB' 'Buffers: 9308 kB' 'Cached: 11790812 kB' 'SwapCached: 0 kB' 'Active: 8627740 kB' 'Inactive: 3709736 kB' 'Active(anon): 8133464 kB' 'Inactive(anon): 0 kB' 'Active(file): 494276 kB' 'Inactive(file): 3709736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540724 kB' 'Mapped: 178240 kB' 'Shmem: 7596108 kB' 'KReclaimable: 398148 kB' 'Slab: 868128 kB' 'SReclaimable: 398148 kB' 'SUnreclaim: 469980 kB' 'KernelStack: 15968 kB' 'PageTables: 8084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9430388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198880 kB' 'VmallocChunk: 0 kB' 'Percpu: 53856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 441856 kB' 'DirectMap2M: 6574080 kB' 'DirectMap1G: 95420416 kB' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
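The xtrace entries around this point come from setup/common.sh's get_meminfo helper scanning /proc/meminfo one "Key: value" pair at a time (IFS=': ', read -r var val _, compare against the requested key, continue on a mismatch). A minimal sketch of that loop, reconstructed from the traced commands only; the real SPDK helper may differ in details and the per-node branch is an approximation:

get_meminfo() {
    # Print the value for one /proc/meminfo key, optionally scoped to a NUMA node.
    local get=$1 node=${2:-}
    local var val _ mem line
    local mem_f=/proc/meminfo

    # Per-node figures come from sysfs when a node is given and the file exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node [0-9]* }")   # drop the "Node N " prefix on per-node lines

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # e.g. HugePages_Surp, AnonHugePages
        echo "$val"
        return 0
    done
}

In this run, calls such as get_meminfo AnonHugePages and get_meminfo HugePages_Surp print 0, which is where the anon=0 and surp=0 assignments further down in the trace come from.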
00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.280 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.281 
15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.281 15:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.281 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.281 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76101808 kB' 'MemAvailable: 79868032 kB' 'Buffers: 9308 kB' 'Cached: 11790816 kB' 'SwapCached: 0 kB' 'Active: 8627332 kB' 
'Inactive: 3709736 kB' 'Active(anon): 8133056 kB' 'Inactive(anon): 0 kB' 'Active(file): 494276 kB' 'Inactive(file): 3709736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540292 kB' 'Mapped: 178116 kB' 'Shmem: 7596112 kB' 'KReclaimable: 398148 kB' 'Slab: 868096 kB' 'SReclaimable: 398148 kB' 'SUnreclaim: 469948 kB' 'KernelStack: 15952 kB' 'PageTables: 8020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9430404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198880 kB' 'VmallocChunk: 0 kB' 'Percpu: 53856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 441856 kB' 'DirectMap2M: 6574080 kB' 'DirectMap1G: 95420416 kB' 00:04:57.281 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.281 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.281 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.281 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.281 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.281 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.288 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
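The surrounding hugepages.sh trace is the setup/verification arithmetic of even_2G_alloc: 2097152 kB (2 GiB) at the default 2048 kB hugepage size is 1024 pages, split evenly over the two NUMA nodes (nodes_test[0]=512, nodes_test[1]=512), and each node's expected count is then adjusted by the surplus pages read from meminfo. A rough sketch of that arithmetic using values visible in this run; the node count and sizes are taken from the log, not from the script source:

# Values observed in this run; the real script derives them dynamically.
size_kb=2097152          # requested allocation (2 GiB)
hugepagesize_kb=2048     # Hugepagesize reported in /proc/meminfo
nr_hugepages=$((size_kb / hugepagesize_kb))   # 1024 pages

node_count=2
declare -a nodes_test
for ((node = node_count - 1; node >= 0; node--)); do
    nodes_test[node]=$((nr_hugepages / node_count))   # 512 pages per node
done

for node in "${!nodes_test[@]}"; do
    surp=0   # get_meminfo HugePages_Surp returned 0 for this node in the trace
    (( nodes_test[node] += surp ))
    echo "node${node}=${nodes_test[node]} expecting $((nr_hugepages / node_count))"
done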
00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.289 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.290 15:37:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76102620 kB' 'MemAvailable: 79868844 kB' 'Buffers: 9308 kB' 'Cached: 11790832 kB' 'SwapCached: 0 kB' 'Active: 8627260 kB' 'Inactive: 3709736 kB' 'Active(anon): 8132984 kB' 'Inactive(anon): 0 kB' 'Active(file): 494276 kB' 'Inactive(file): 3709736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540172 kB' 'Mapped: 178116 kB' 'Shmem: 7596128 kB' 'KReclaimable: 398148 kB' 'Slab: 868096 kB' 'SReclaimable: 398148 kB' 'SUnreclaim: 469948 kB' 'KernelStack: 15936 kB' 'PageTables: 7968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9430056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198880 kB' 'VmallocChunk: 0 kB' 'Percpu: 53856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 441856 kB' 'DirectMap2M: 6574080 kB' 'DirectMap1G: 95420416 kB' 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.290 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.291 15:37:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.291 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.292 15:37:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:57.292 nr_hugepages=1024 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:57.292 resv_hugepages=0 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:57.292 surplus_hugepages=0 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:57.292 anon_hugepages=0 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.292 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76101708 kB' 'MemAvailable: 79867932 kB' 'Buffers: 9308 kB' 'Cached: 11790856 kB' 'SwapCached: 0 kB' 'Active: 8627364 kB' 'Inactive: 3709736 kB' 'Active(anon): 8133088 kB' 'Inactive(anon): 0 kB' 'Active(file): 494276 kB' 'Inactive(file): 3709736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540232 kB' 'Mapped: 178116 kB' 'Shmem: 7596152 kB' 'KReclaimable: 398148 kB' 'Slab: 868096 kB' 'SReclaimable: 398148 kB' 'SUnreclaim: 469948 kB' 'KernelStack: 15920 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9430084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198848 kB' 'VmallocChunk: 0 kB' 'Percpu: 53856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 441856 kB' 'DirectMap2M: 6574080 kB' 'DirectMap1G: 95420416 kB' 00:04:57.293 15:37:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.293 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:57.294 15:37:52 
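The xtrace entries above are all iterations of one meminfo lookup: the helper picks /proc/meminfo (or a per-node meminfo file), strips the "Node N " prefix, then scans "key: value" pairs until it hits the requested key (HugePages_Rsvd, then HugePages_Total) and echoes its value. A minimal sketch of that lookup, reconstructed from the trace rather than taken from the actual setup/common.sh, could look like this (the function name is hypothetical):

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern used to strip the node prefix

# get_meminfo_sketch KEY [NODE] - echo the value of KEY from /proc/meminfo,
# or from /sys/devices/system/node/nodeNODE/meminfo when NODE is given.
get_meminfo_sketch() {
    local get=$1
    local node=${2:-}
    local var val _ line
    local mem_f=/proc/meminfo
    local -a mem

    # Fall back to the per-node file only if it exists (node may be empty).
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo

    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines are prefixed with "Node <N> "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Example, mirroring the calls traced above (values depend on the host):
#   get_meminfo_sketch HugePages_Total      -> 1024
#   get_meminfo_sketch HugePages_Surp 0     -> 0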
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064936 kB' 'MemFree: 43018848 kB' 'MemUsed: 5046088 kB' 'SwapCached: 0 kB' 'Active: 2983152 kB' 'Inactive: 130664 kB' 'Active(anon): 2723980 kB' 'Inactive(anon): 0 kB' 'Active(file): 259172 kB' 'Inactive(file): 130664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2843864 kB' 'Mapped: 139696 kB' 'AnonPages: 273124 kB' 'Shmem: 2454028 kB' 'KernelStack: 8568 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107500 kB' 'Slab: 370176 kB' 'SReclaimable: 107500 kB' 'SUnreclaim: 262676 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.294 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.295 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.296 15:37:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:57.296 15:37:52 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220584 kB' 'MemFree: 33082608 kB' 'MemUsed: 11137976 kB' 'SwapCached: 0 kB' 'Active: 5643596 kB' 'Inactive: 3579072 kB' 'Active(anon): 5408492 kB' 'Inactive(anon): 0 kB' 'Active(file): 235104 kB' 'Inactive(file): 3579072 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8956356 kB' 'Mapped: 38420 kB' 'AnonPages: 266408 kB' 'Shmem: 5142180 kB' 'KernelStack: 7320 kB' 'PageTables: 3360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 290648 kB' 'Slab: 497920 kB' 'SReclaimable: 290648 kB' 'SUnreclaim: 207272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.296 15:37:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.296 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.297 15:37:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.297 15:37:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:57.297 15:37:52 
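The trace above is setup/common.sh's get_meminfo walking /sys/devices/system/node/node1/meminfo field by field until it reaches HugePages_Surp, echoing its value (0) and returning. A minimal sketch of an equivalent lookup, assuming the same /proc and sysfs layout (this is not the autotest helper itself, and the function name is made up):

    get_node_meminfo() {
        # Usage: get_node_meminfo <field> [<numa-node>]
        local get=$1 node=$2 line var val _
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            line=${line#Node "$node" }        # per-node files prefix every line with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then     # e.g. HugePages_Surp
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }
    # get_node_meminfo HugePages_Surp 1   # would print 0 for node1 in the run above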
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:04:57.297 node0=512 expecting 512 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:04:57.297 node1=512 expecting 512 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]] 00:04:57.297 00:04:57.297 real 0m3.177s 00:04:57.297 user 0m1.241s 00:04:57.297 sys 0m1.996s 00:04:57.297 15:37:52 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.298 15:37:52 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:57.298 ************************************ 00:04:57.298 END TEST even_2G_alloc 00:04:57.298 ************************************ 00:04:57.298 15:37:52 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc 00:04:57.298 15:37:52 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.298 15:37:52 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.298 15:37:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:57.298 ************************************ 00:04:57.298 START TEST odd_alloc 00:04:57.298 ************************************ 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1129 -- # odd_alloc 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 
00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.298 15:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:59.851 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:59.851 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:59.851 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:59.851 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:59.851 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:59.851 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:59.851 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:59.851 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:59.851 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:59.851 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:59.851 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:59.851 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:59.851 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:59.851 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:59.851 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:59.851 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:59.851 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 
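The odd_alloc prologue above requests 2098176 kB of hugepages, which at the 2048 kB page size corresponds to nr_hugepages=1025, and then spreads that odd count over the two NUMA nodes so that node1 gets 512 pages and node0 gets the remaining 513. A minimal sketch of that floor-then-carry distribution (the function and variable names here are assumptions, not the hugepages.sh code):

    split_hugepages_across_nodes() {
        # Usage: split_hugepages_across_nodes <total-pages> <node-count>
        local total=$1 nodes=$2 share
        local -a per_node=()
        while (( nodes > 0 )); do
            share=$(( total / nodes ))     # floor division for the highest remaining node
            per_node[nodes - 1]=$share
            total=$(( total - share ))     # leftover pages carry to the lower-numbered nodes
            nodes=$(( nodes - 1 ))
        done
        echo "${per_node[@]}"
    }
    # split_hugepages_across_nodes 1025 2   # prints "513 512", matching node0=513 node1=512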
00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76136196 kB' 'MemAvailable: 79902420 kB' 'Buffers: 9308 kB' 'Cached: 11790972 kB' 'SwapCached: 0 kB' 'Active: 8626240 kB' 'Inactive: 3709736 kB' 'Active(anon): 8131964 kB' 'Inactive(anon): 0 kB' 'Active(file): 494276 kB' 'Inactive(file): 3709736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539000 kB' 'Mapped: 177316 kB' 'Shmem: 7596268 kB' 'KReclaimable: 398148 kB' 'Slab: 868040 kB' 'SReclaimable: 398148 kB' 'SUnreclaim: 469892 kB' 'KernelStack: 15888 kB' 'PageTables: 7776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481764 kB' 'Committed_AS: 9423336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198848 kB' 'VmallocChunk: 0 kB' 'Percpu: 53856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 441856 kB' 'DirectMap2M: 6574080 kB' 'DirectMap1G: 95420416 kB' 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.115 15:37:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.115 15:37:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.115 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 
15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76137152 kB' 'MemAvailable: 79903376 kB' 'Buffers: 9308 kB' 'Cached: 11790972 kB' 'SwapCached: 0 kB' 'Active: 8626744 kB' 'Inactive: 3709736 kB' 'Active(anon): 8132468 kB' 'Inactive(anon): 0 kB' 'Active(file): 494276 kB' 'Inactive(file): 3709736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539536 kB' 'Mapped: 177352 kB' 'Shmem: 7596268 kB' 'KReclaimable: 398148 kB' 'Slab: 868076 kB' 'SReclaimable: 398148 kB' 'SUnreclaim: 469928 kB' 'KernelStack: 15872 kB' 'PageTables: 7736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481764 kB' 'Committed_AS: 9424836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198800 kB' 'VmallocChunk: 0 kB' 'Percpu: 53856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 441856 kB' 'DirectMap2M: 6574080 kB' 'DirectMap1G: 95420416 kB' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 
15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.116 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76139600 kB' 'MemAvailable: 79905824 kB' 'Buffers: 9308 kB' 'Cached: 11790992 kB' 'SwapCached: 0 kB' 'Active: 8627660 kB' 'Inactive: 3709736 kB' 'Active(anon): 8133384 kB' 'Inactive(anon): 0 kB' 'Active(file): 494276 kB' 'Inactive(file): 3709736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540584 kB' 'Mapped: 177352 kB' 'Shmem: 7596288 kB' 'KReclaimable: 398148 kB' 'Slab: 868076 kB' 'SReclaimable: 398148 kB' 'SUnreclaim: 469928 kB' 'KernelStack: 15904 kB' 'PageTables: 7992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481764 kB' 'Committed_AS: 9425960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198912 kB' 'VmallocChunk: 0 kB' 'Percpu: 53856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 441856 kB' 'DirectMap2M: 6574080 kB' 'DirectMap1G: 95420416 kB' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.117 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
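The trace above is setup/common.sh walking every "Key: value" line of /proc/meminfo with IFS=': ' until it reaches the requested field (here HugePages_Surp, then HugePages_Rsvd), echoing the value and returning. A minimal standalone sketch of that parsing pattern (illustrative only, not the SPDK helper itself; the function name is hypothetical):

#!/usr/bin/env bash
# Sketch of the meminfo-scanning loop visible in the trace: split each line
# on ": " and skip fields until the requested key is found.
meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every other field just continues, as in the log
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

# In the run above both of these report 0:
meminfo_value HugePages_Surp
meminfo_value HugePages_Rsvd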
00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 
15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025 00:05:00.118 nr_hugepages=1025 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:05:00.118 resv_hugepages=0 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:05:00.118 surplus_hugepages=0 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:05:00.118 anon_hugepages=0 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages )) 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76139356 kB' 'MemAvailable: 79905580 kB' 'Buffers: 9308 kB' 'Cached: 11791012 kB' 'SwapCached: 0 kB' 'Active: 8627380 kB' 'Inactive: 3709736 kB' 'Active(anon): 8133104 kB' 'Inactive(anon): 0 kB' 'Active(file): 494276 kB' 'Inactive(file): 3709736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540120 kB' 'Mapped: 177352 kB' 'Shmem: 7596308 kB' 'KReclaimable: 398148 kB' 'Slab: 868076 kB' 'SReclaimable: 398148 kB' 'SUnreclaim: 469928 kB' 'KernelStack: 16128 kB' 'PageTables: 8192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481764 kB' 'Committed_AS: 9424504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198976 kB' 'VmallocChunk: 0 kB' 'Percpu: 53856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 441856 kB' 'DirectMap2M: 6574080 kB' 'DirectMap1G: 95420416 kB' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.118 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 
15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
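At this point the trace has confirmed HugePages_Total == 1025 == nr_hugepages + surp + resv and get_nodes has recorded the odd split (513 pages expected on one node, 512 on the other) before re-running the same lookup per NUMA node. The per-node variant only swaps the input file, falling back to /sys/devices/system/node/node<N>/meminfo when it exists, as common.sh does. A hedged sketch of that lookup (hypothetical name, assumes a Linux sysfs node layout):

#!/usr/bin/env bash
# Sketch of the per-node lookup performed next in the trace.
node_meminfo_value() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo
    # Prefer the node-specific meminfo file when present.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    # Node files prefix each line with "Node N "; strip it before parsing.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}

node_meminfo_value HugePages_Surp 0   # 0 on node0 in the run above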
00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064936 kB' 'MemFree: 43049660 kB' 'MemUsed: 5015276 kB' 'SwapCached: 0 kB' 'Active: 2983584 kB' 'Inactive: 130664 kB' 'Active(anon): 2724412 kB' 'Inactive(anon): 0 kB' 'Active(file): 259172 kB' 'Inactive(file): 130664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2843896 kB' 'Mapped: 139244 kB' 'AnonPages: 273036 kB' 'Shmem: 2454060 kB' 'KernelStack: 8696 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107500 kB' 'Slab: 370256 kB' 'SReclaimable: 107500 kB' 'SUnreclaim: 262756 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.119 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.381 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.381 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.381 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.381 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.381 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.381 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.382 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:00.383 15:37:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220584 kB' 'MemFree: 33089192 kB' 'MemUsed: 11131392 kB' 'SwapCached: 0 kB' 'Active: 5643820 kB' 'Inactive: 3579072 kB' 'Active(anon): 5408716 kB' 'Inactive(anon): 0 kB' 'Active(file): 235104 kB' 'Inactive(file): 3579072 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8956468 kB' 'Mapped: 38108 kB' 'AnonPages: 266536 kB' 'Shmem: 5142292 kB' 'KernelStack: 7320 kB' 'PageTables: 3560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 290648 kB' 'Slab: 497820 kB' 'SReclaimable: 290648 kB' 'SUnreclaim: 207172 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.383 
15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.383 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.384 
15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513' 00:05:00.384 node0=513 expecting 513 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:05:00.384 node1=512 expecting 512 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:00.384 00:05:00.384 real 0m3.166s 00:05:00.384 user 0m1.206s 00:05:00.384 sys 0m2.010s 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.384 15:37:55 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:00.384 ************************************ 00:05:00.384 END TEST odd_alloc 00:05:00.384 ************************************ 00:05:00.384 15:37:55 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test custom_alloc custom_alloc 00:05:00.384 15:37:55 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.384 15:37:55 setup.sh.hugepages -- 
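The odd_alloc pass/fail logic that closes above is compact once unpacked: the test indexes two scratch arrays by the hugepage count itself, so the key lists come out in numeric order and a single string comparison checks that the counts observed on the nodes (513/512 from /sys) equal the expected split; that is the [[ 512 513 == \5\1\2\ \5\1\3 ]] seen in the trace. A minimal sketch with the values from this run:

# Sketch of the final odd_alloc check, using the counts from this run.
declare -a sorted_t sorted_s
sorted_t[513]=1; sorted_t[512]=1     # expected split of the odd-sized 1025-page pool
sorted_s[513]=1; sorted_s[512]=1     # counts reported by node0/node1
# Indexing by the count sorts the keys implicitly, so one comparison suffices.
[[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]] && echo "odd_alloc: per-node counts match (513+512=1025)"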
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.384 15:37:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:00.384 ************************************ 00:05:00.384 START TEST custom_alloc 00:05:00.384 ************************************ 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1129 -- # custom_alloc 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=() 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@166 -- # (( 2 > 1 )) 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152 
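custom_alloc starts by requesting 1048576 kB of hugepages via get_test_nr_hugepages. The trace only shows the results (nr_hugepages=512, then 256 per node); the arithmetic below is inferred from the 2048 kB Hugepagesize reported further down in this log.

# Worked sketch of the sizing behind "get_test_nr_hugepages 1048576" (results match the trace;
# the division by the 2048 kB default hugepage size is inferred, not shown in the log).
size_kb=1048576
default_hugepage_kb=2048
nr_hugepages=$(( size_kb / default_hugepage_kb ))   # -> 512
no_nodes=2
per_node=$(( nr_hugepages / no_nodes ))             # -> 256 on each of node0/node1
echo "nr_hugepages=$nr_hugepages (${per_node} per node)"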
00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 )) 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:05:00.384 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024 00:05:00.385 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:05:00.385 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:00.385 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:00.385 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:05:00.385 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:00.385 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:00.385 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:05:00.385 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:05:00.385 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:05:00.385 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:05:00.385 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:05:00.385 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:05:00.385 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:05:00.385 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:05:00.385 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 2 > 0 )) 00:05:00.385 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:00.385 
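With nodes_hp[0]=512 and nodes_hp[1]=1024 established, the hugepages.sh@171-173 loop traced above assembles the HUGENODE spec that appears verbatim a little further down. Because custom_alloc sets IFS=',' at its top, joining the array with "${HUGENODE[*]}" yields the comma-separated string. A self-contained sketch:

# Sketch of the HUGENODE assembly traced above (hugepages.sh@171-173).
# The real function declares "local IFS=,"; here it is set globally for the demo.
IFS=,
declare -a nodes_hp=( [0]=512 [1]=1024 )   # per-node targets from the two sizing passes
declare -a HUGENODE=()
_nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=( "nodes_hp[$node]=${nodes_hp[node]}" )
    (( _nr_hugepages += nodes_hp[node] ))
done
echo "HUGENODE=${HUGENODE[*]}"   # -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
echo "total=$_nr_hugepages"      # -> 1536, the nr_hugepages verified below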
15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:05:00.385 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:00.385 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024 00:05:00.385 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:05:00.385 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:00.385 15:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:05:00.385 15:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.385 15:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:03.689 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:03.689 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:03.689 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:03.689 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:03.689 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:03.689 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:03.689 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:03.689 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:03.689 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:03.689 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:03.689 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:03.689 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:03.689 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:03.689 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:03.689 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:03.689 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:03.689 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local 
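verify_nr_hugepages begins above by testing "always [madvise] never" against *[never]*; that string is the content of the kernel's transparent-hugepage control file, so the AnonHugePages lookup that follows only matters when THP is not disabled. A hedged stand-alone equivalent, reading /proc/meminfo directly instead of the script's get_meminfo helper:

# Sketch of the anon-hugepage guard at hugepages.sh@95-96. The THP control file
# path is standard Linux sysfs; the awk lookup stands in for get_meminfo here.
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)    # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
else
    anon=0
fi
echo "THP: $thp -> AnonHugePages accounted as ${anon} kB"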
mem_f mem 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 75086488 kB' 'MemAvailable: 78852712 kB' 'Buffers: 9308 kB' 'Cached: 11791124 kB' 'SwapCached: 0 kB' 'Active: 8627452 kB' 'Inactive: 3709736 kB' 'Active(anon): 8133176 kB' 'Inactive(anon): 0 kB' 'Active(file): 494276 kB' 'Inactive(file): 3709736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540060 kB' 'Mapped: 177436 kB' 'Shmem: 7596420 kB' 'KReclaimable: 398148 kB' 'Slab: 867908 kB' 'SReclaimable: 398148 kB' 'SUnreclaim: 469760 kB' 'KernelStack: 15888 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958500 kB' 'Committed_AS: 9423880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198944 kB' 'VmallocChunk: 0 kB' 'Percpu: 53856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 441856 kB' 'DirectMap2M: 6574080 kB' 'DirectMap1G: 95420416 kB' 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.689 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 
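The system-wide meminfo dump above is internally consistent with the HUGENODE request: 1536 pages at the reported 2048 kB Hugepagesize account exactly for the 3145728 kB Hugetlb figure. Quick check:

# Cross-check of the dump above: HugePages_Total x Hugepagesize == Hugetlb.
hugepages_total=1536
hugepagesize_kb=2048
echo $(( hugepages_total * hugepagesize_kb ))   # 3145728, matching "Hugetlb: 3145728 kB"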
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 
15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.690 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 75086876 kB' 'MemAvailable: 78853100 kB' 'Buffers: 9308 kB' 'Cached: 11791128 kB' 'SwapCached: 0 kB' 'Active: 8627712 kB' 'Inactive: 3709736 kB' 'Active(anon): 8133436 kB' 'Inactive(anon): 0 kB' 'Active(file): 494276 kB' 'Inactive(file): 3709736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540292 kB' 'Mapped: 177316 kB' 'Shmem: 7596424 kB' 'KReclaimable: 398148 kB' 'Slab: 867888 kB' 'SReclaimable: 398148 kB' 'SUnreclaim: 469740 kB' 'KernelStack: 15888 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958500 kB' 'Committed_AS: 9423900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198944 kB' 'VmallocChunk: 0 kB' 'Percpu: 53856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 441856 kB' 'DirectMap2M: 6574080 kB' 'DirectMap1G: 95420416 kB' 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.691 15:37:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.691 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.692 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:03.693 15:37:58 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 75086876 kB' 'MemAvailable: 78853100 kB' 'Buffers: 9308 kB' 'Cached: 11791140 kB' 'SwapCached: 0 kB' 'Active: 8627336 kB' 'Inactive: 3709736 kB' 'Active(anon): 8133060 kB' 'Inactive(anon): 0 kB' 'Active(file): 494276 kB' 'Inactive(file): 3709736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539856 kB' 'Mapped: 177316 kB' 'Shmem: 7596436 kB' 'KReclaimable: 398148 kB' 'Slab: 867888 kB' 'SReclaimable: 398148 kB' 'SUnreclaim: 469740 kB' 'KernelStack: 15872 kB' 'PageTables: 7748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958500 kB' 'Committed_AS: 9423920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198944 kB' 'VmallocChunk: 0 kB' 'Percpu: 53856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 441856 kB' 'DirectMap2M: 6574080 kB' 'DirectMap1G: 95420416 kB' 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.693 15:37:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.693 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.694 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536 00:05:03.695 nr_hugepages=1536 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:05:03.695 resv_hugepages=0 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:05:03.695 surplus_hugepages=0 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:05:03.695 anon_hugepages=0 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages )) 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.695 15:37:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 75087496 kB' 'MemAvailable: 78853720 kB' 'Buffers: 9308 kB' 'Cached: 11791184 kB' 'SwapCached: 0 kB' 'Active: 8627360 kB' 'Inactive: 3709736 kB' 'Active(anon): 8133084 kB' 'Inactive(anon): 0 kB' 'Active(file): 494276 kB' 'Inactive(file): 3709736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539872 kB' 'Mapped: 177316 kB' 'Shmem: 7596480 kB' 'KReclaimable: 398148 kB' 'Slab: 867888 kB' 'SReclaimable: 398148 kB' 'SUnreclaim: 469740 kB' 'KernelStack: 15872 kB' 'PageTables: 7748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958500 kB' 'Committed_AS: 9423940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198944 kB' 'VmallocChunk: 0 kB' 'Percpu: 53856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 441856 kB' 'DirectMap2M: 6574080 kB' 'DirectMap1G: 95420416 kB' 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.695 15:37:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.695 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
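Note: the trace above is setup/common.sh's get_meminfo walking every /proc/meminfo (or per-node meminfo) field and skipping it with "continue" until it reaches the requested HugePages_Total entry. A condensed sketch of that helper, reconstructed from the visible trace only; exact line numbers and details in setup/common.sh may differ:

    get_meminfo() {
        # Usage: get_meminfo <field> [node]  -> prints the field's value from meminfo.
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # Prefer the per-NUMA-node meminfo file when a node id is supplied.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node <N> "; strip that prefix.
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # skip every field we were not asked for
            echo "$val"
            return 0
        done
        return 1
    }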
00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.696 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064936 kB' 'MemFree: 43033100 kB' 'MemUsed: 5031836 kB' 'SwapCached: 0 kB' 'Active: 2983112 kB' 
'Inactive: 130664 kB' 'Active(anon): 2723940 kB' 'Inactive(anon): 0 kB' 'Active(file): 259172 kB' 'Inactive(file): 130664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2844004 kB' 'Mapped: 139256 kB' 'AnonPages: 272888 kB' 'Shmem: 2454168 kB' 'KernelStack: 8536 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107500 kB' 'Slab: 370068 kB' 'SReclaimable: 107500 kB' 'SUnreclaim: 262568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
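Note: the printf block above and the scan that follows are the per-node leg of the check: the system-wide HugePages_Total (1536) has already been matched against nr_hugepages + surp + resv, and the script now reads HugePages_Surp from each node's meminfo and folds it into the expected per-node counts. A hedged sketch of that loop, assuming the get_meminfo helper shown earlier (variable names here are illustrative, not the script's):

    shopt -s extglob
    nodes_sys=() ; nodes_test=()
    # Record how many hugepages each NUMA node currently holds.
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}                                   # numeric node id
        nodes_sys[n]=$(get_meminfo HugePages_Total "$n")
    done
    # This run expects the 1536 pages split as 512 on node 0 and 1024 on node 1.
    nodes_test[0]=512
    nodes_test[1]=1024
    for node in "${!nodes_test[@]}"; do
        surp=$(get_meminfo HugePages_Surp "$node")         # 0 for both nodes in this run
        (( nodes_test[node] += surp ))
    done
    echo "node0=${nodes_test[0]} expecting ${nodes_sys[0]}"
    echo "node1=${nodes_test[1]} expecting ${nodes_sys[1]}"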
00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.697 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.698 
15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220584 kB' 'MemFree: 32054472 kB' 'MemUsed: 12166112 kB' 'SwapCached: 0 kB' 'Active: 5644508 kB' 'Inactive: 3579072 kB' 'Active(anon): 5409404 kB' 'Inactive(anon): 0 kB' 'Active(file): 235104 kB' 'Inactive(file): 3579072 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 
kB' 'FilePages: 8956492 kB' 'Mapped: 38060 kB' 'AnonPages: 267188 kB' 'Shmem: 5142316 kB' 'KernelStack: 7320 kB' 'PageTables: 3416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 290648 kB' 'Slab: 497820 kB' 'SReclaimable: 290648 kB' 'SUnreclaim: 207172 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.698 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.699 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:05:03.700 node0=512 expecting 512 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024' 00:05:03.700 node1=1024 expecting 1024 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:05:03.700 00:05:03.700 real 0m3.075s 00:05:03.700 user 0m1.290s 00:05:03.700 sys 0m1.860s 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.700 15:37:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:03.700 ************************************ 00:05:03.700 END TEST custom_alloc 00:05:03.700 ************************************ 00:05:03.700 15:37:58 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:03.700 15:37:58 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.700 15:37:58 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.700 15:37:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:03.700 ************************************ 00:05:03.700 START TEST no_shrink_alloc 00:05:03.700 ************************************ 
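Note: the no_shrink_alloc prologue below requests 1024 x 2 MiB pages pinned to a single node (NRHUGE=1024, HUGENODE=0) and then re-runs scripts/setup.sh in output mode. A hedged sketch of the equivalent manual sysfs programming, assuming a 2048 kB default hugepage size and that node0 exists; HPATH is an illustrative variable, not from the script:

    NRHUGE=1024      # pages requested by the test
    HUGENODE=0       # pin the whole allocation to NUMA node 0
    HPATH=/sys/devices/system/node/node${HUGENODE}/hugepages/hugepages-2048kB/nr_hugepages
    echo "$NRHUGE" | sudo tee "$HPATH" >/dev/null
    cat "$HPATH"     # verify what the kernel actually granted (may be lower)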
00:05:03.700 15:37:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1129 -- # no_shrink_alloc 00:05:03.700 15:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0 00:05:03.700 15:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:05:03.700 15:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:05:03.700 15:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift 00:05:03.700 15:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0') 00:05:03.700 15:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids 00:05:03.700 15:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:05:03.700 15:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:05:03.700 15:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:05:03.700 15:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:05:03.700 15:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:05:03.700 15:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:05:03.700 15:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:05:03.700 15:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:05:03.700 15:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:05:03.700 15:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:05:03.700 15:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:05:03.700 15:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:05:03.700 15:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0 00:05:03.700 15:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024 00:05:03.700 15:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0 00:05:03.700 15:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output 00:05:03.700 15:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.700 15:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:06.237 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:06.237 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:06.237 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:06.237 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:06.237 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:06.237 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:06.237 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:06.237 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:06.237 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:06.237 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:06.237 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:06.237 0000:80:04.5 (8086 2021): Already 
using the vfio-pci driver 00:05:06.237 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:06.237 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:06.237 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:06.237 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:06.237 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:06.501 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages 00:05:06.501 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:05:06.501 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:05:06.501 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:05:06.501 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:05:06.501 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76122020 kB' 'MemAvailable: 79888196 kB' 'Buffers: 9308 kB' 'Cached: 11791276 kB' 'SwapCached: 0 kB' 'Active: 8627508 kB' 'Inactive: 3709736 kB' 'Active(anon): 8133232 kB' 'Inactive(anon): 0 kB' 'Active(file): 494276 kB' 'Inactive(file): 3709736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540100 kB' 'Mapped: 177448 kB' 'Shmem: 7596572 kB' 'KReclaimable: 398100 kB' 'Slab: 867900 kB' 'SReclaimable: 398100 kB' 'SUnreclaim: 469800 kB' 'KernelStack: 15888 kB' 'PageTables: 7820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9424580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198832 kB' 'VmallocChunk: 0 kB' 'Percpu: 53856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 441856 kB' 'DirectMap2M: 6574080 kB' 'DirectMap1G: 95420416 kB' 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.502 
15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.502 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node= 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76122132 kB' 'MemAvailable: 79888308 kB' 'Buffers: 9308 kB' 'Cached: 11791276 kB' 'SwapCached: 0 kB' 'Active: 8627720 kB' 'Inactive: 3709736 kB' 'Active(anon): 8133444 kB' 'Inactive(anon): 0 kB' 'Active(file): 494276 kB' 'Inactive(file): 3709736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540340 kB' 'Mapped: 177328 kB' 'Shmem: 7596572 kB' 'KReclaimable: 398100 kB' 'Slab: 867860 kB' 'SReclaimable: 398100 kB' 'SUnreclaim: 469760 kB' 'KernelStack: 15888 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9424596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198816 kB' 'VmallocChunk: 0 kB' 'Percpu: 53856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 441856 kB' 'DirectMap2M: 6574080 kB' 'DirectMap1G: 95420416 kB' 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.503 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 
15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.504 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.505 15:38:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76121640 kB' 'MemAvailable: 79887816 kB' 'Buffers: 9308 kB' 'Cached: 11791276 kB' 'SwapCached: 0 kB' 'Active: 8627356 kB' 'Inactive: 3709736 kB' 'Active(anon): 8133080 kB' 'Inactive(anon): 0 kB' 'Active(file): 494276 kB' 'Inactive(file): 3709736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539976 kB' 'Mapped: 177328 kB' 'Shmem: 7596572 kB' 'KReclaimable: 398100 kB' 'Slab: 867860 kB' 'SReclaimable: 398100 kB' 'SUnreclaim: 469760 kB' 'KernelStack: 15872 kB' 'PageTables: 7756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9424252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198816 kB' 'VmallocChunk: 0 kB' 'Percpu: 53856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 441856 kB' 'DirectMap2M: 6574080 kB' 'DirectMap1G: 95420416 kB' 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.505 15:38:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.505 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.506 15:38:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _
00:05:06.506 (setup/common.sh@31-32 read loop continues over SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total and HugePages_Free; none matches HugePages_Rsvd)
00:05:06.507 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.507 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:06.507 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:06.507 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:05:06.507 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:05:06.507 nr_hugepages=1024
00:05:06.507 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:05:06.507 resv_hugepages=0
00:05:06.507 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:05:06.507 surplus_hugepages=0
00:05:06.507 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:05:06.507 anon_hugepages=0
00:05:06.507 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:06.507 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
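The trace above is the generic meminfo lookup in setup/common.sh: read a meminfo file line by line with IFS=': ', skip every key until the requested one (here HugePages_Rsvd) matches, then echo its value. A minimal, self-contained sketch of that pattern follows; get_meminfo_sketch is an illustrative name, not the actual helper.

#!/usr/bin/env bash
# Sketch only: same "split on ': ' and skip until the key matches" pattern
# as the get_meminfo trace above.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node argument, prefer the per-NUMA-node view when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val
    while IFS= read -r line; do
        # Per-node files prefix every entry with "Node <n> "; drop that prefix
        # so both views parse the same way.
        [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # e.g. skip MemTotal, MemFree, ...
        echo "$val"                        # value only, trailing "kB" discarded
        return 0
    done < "$mem_f"
    return 1
}

On this box, get_meminfo_sketch HugePages_Rsvd would print 0 and get_meminfo_sketch HugePages_Total would print 1024, which is where the resv=0 and nr_hugepages=1024 values echoed above come from.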
00:05:06.507 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:05:06.507 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:06.507 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:06.507 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:06.507 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:06.507 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.507 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:06.507 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:06.507 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:06.507 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.507 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:06.507 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:06.507 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76123016 kB' 'MemAvailable: 79889192 kB' 'Buffers: 9308 kB' 'Cached: 11791316 kB' 'SwapCached: 0 kB' 'Active: 8627396 kB' 'Inactive: 3709736 kB' 'Active(anon): 8133120 kB' 'Inactive(anon): 0 kB' 'Active(file): 494276 kB' 'Inactive(file): 3709736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539960 kB' 'Mapped: 177328 kB' 'Shmem: 7596612 kB' 'KReclaimable: 398100 kB' 'Slab: 867860 kB' 'SReclaimable: 398100 kB' 'SUnreclaim: 469760 kB' 'KernelStack: 15856 kB' 'PageTables: 7696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9424644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198784 kB' 'VmallocChunk: 0 kB' 'Percpu: 53856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 441856 kB' 'DirectMap2M: 6574080 kB' 'DirectMap1G: 95420416 kB'
00:05:06.508 (setup/common.sh@31-32 read loop steps through every snapshot key from MemTotal to Unaccepted, in the order listed above; none matches HugePages_Total)
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
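The @28-29 steps above are what let one parser handle both /proc/meminfo and the per-node files: slurp the file with mapfile, then strip the per-node "Node <n> " prefix with an extglob pattern so every entry looks like a plain "Key: value" pair. A standalone sketch of just that step, with an illustrative file choice:

#!/usr/bin/env bash
# Sketch of the snapshot step traced at setup/common.sh@28-29.
shopt -s extglob                                  # +([0-9]) below is an extglob pattern
mem_f=/proc/meminfo                               # or /sys/devices/system/node/node0/meminfo
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")                  # no-op for /proc/meminfo entries
printf '%s\n' "${mem[@]:0:3}"                     # first few normalized entries

The trace relies on the sourcing script already having extglob enabled; the explicit shopt here just keeps the sketch self-contained.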
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:06.509 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064936 kB' 'MemFree: 41982668 kB' 'MemUsed: 6082268 kB' 'SwapCached: 0 kB' 'Active: 2982060 kB' 'Inactive: 130664 kB' 'Active(anon): 2722888 kB' 'Inactive(anon): 0 kB' 'Active(file): 259172 kB' 'Inactive(file): 130664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2844124 kB' 'Mapped: 139268 kB' 'AnonPages: 271792 kB' 'Shmem: 2454288 kB' 'KernelStack: 8552 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107500 kB' 'Slab: 369996 kB' 'SReclaimable: 107500 kB' 'SUnreclaim: 262496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:06.510 (setup/common.sh@31-32 read loop steps through every node0 snapshot key from MemTotal to HugePages_Free, in the order listed above; none matches HugePages_Surp)
00:05:06.510 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.510 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:06.510 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:06.510 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:05:06.510 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:05:06.510 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:05:06.510 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:05:06.510 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
00:05:06.511 node0=1024 expecting 1024
00:05:06.511 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
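The node0=1024 expecting 1024 line is the per-node tally: the hugepage count each NUMA node actually reports against what the test expects. A rough standalone sketch of that comparison; the expected values and the awk extraction are illustrative, not the hugepages.sh implementation:

#!/usr/bin/env bash
# Sketch: compare each node's reported HugePages_Total with an expected count,
# mirroring the "node0=1024 expecting 1024" output above.
declare -A expected=( [0]=1024 [1]=0 )            # illustrative expectations

for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # Per-node meminfo lines look like "Node 0 HugePages_Total:  1024".
    total=$(awk -v n="$node" '$1 == "Node" && $2 == n && $3 == "HugePages_Total:" { print $4 }' \
        "$node_dir/meminfo")
    echo "node${node}=${total} expecting ${expected[$node]:-0}"
done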
00:05:06.511 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no
00:05:06.511 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512
00:05:06.511 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0
00:05:06.511 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output
00:05:06.511 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:06.511 15:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:05:09.049 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:09.049 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:09.049 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:09.049 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:09.049 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:09.049 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:09.049 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:09.049 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:09.049 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:09.049 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:09.049 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:09.049 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:09.049 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:09.049 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:09.049 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:09.049 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:09.049 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:09.314 INFO: Requested 512 hugepages but 1024 already allocated on node0
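With CLEAR_HUGE=no, the rerun of setup.sh asks node0 for 512 hugepages, finds 1024 already reserved, and leaves the pool alone rather than shrinking it, which is what the INFO line reports. A rough sketch of that grow-but-never-shrink check against the standard per-node sysfs knob; the message format mirrors the log, but this is not the setup.sh code itself:

#!/usr/bin/env bash
# Sketch of the behaviour behind the INFO line above.
NRHUGE=${NRHUGE:-512}
HUGENODE=${HUGENODE:-0}
nr_file=/sys/devices/system/node/node${HUGENODE}/hugepages/hugepages-2048kB/nr_hugepages

current=$(<"$nr_file")
if (( current >= NRHUGE )); then
    echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node$HUGENODE"
else
    # Needs root; the kernel may satisfy fewer pages if memory is fragmented.
    echo "$NRHUGE" > "$nr_file"
fi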
00:05:09.314 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages
00:05:09.314 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node
00:05:09.314 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:05:09.314 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:05:09.314 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp
00:05:09.314 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv
00:05:09.314 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon
00:05:09.314 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:09.314 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:05:09.314 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:09.314 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:09.314 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:09.314 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:09.314 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.314 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.314 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.314 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.314 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.314 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:09.314 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:09.314 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76145616 kB' 'MemAvailable: 79911792 kB' 'Buffers: 9308 kB' 'Cached: 11791404 kB' 'SwapCached: 0 kB' 'Active: 8628436 kB' 'Inactive: 3709736 kB' 'Active(anon): 8134160 kB' 'Inactive(anon): 0 kB' 'Active(file): 494276 kB' 'Inactive(file): 3709736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540504 kB' 'Mapped: 177456 kB' 'Shmem: 7596700 kB' 'KReclaimable: 398100 kB' 'Slab: 867912 kB' 'SReclaimable: 398100 kB' 'SUnreclaim: 469812 kB' 'KernelStack: 15888 kB' 'PageTables: 7816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9425136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198832 kB' 'VmallocChunk: 0 kB' 'Percpu: 53856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 441856 kB' 'DirectMap2M: 6574080 kB' 'DirectMap1G: 95420416 kB'
00:05:09.315 (setup/common.sh@31-32 read loop steps through every snapshot key from MemTotal to VmallocUsed, in the order listed above; none matches AnonHugePages)
00:05:09.315 15:38:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.315 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.315 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.315 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.315 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.315 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.315 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.315 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.315 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.315 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.315 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:09.315 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:05:09.315 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:05:09.315 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.315 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:09.315 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:09.315 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.315 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.315 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.315 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.315 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.315 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.315 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.315 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76145156 kB' 'MemAvailable: 79911332 kB' 'Buffers: 9308 kB' 'Cached: 11791408 kB' 'SwapCached: 0 kB' 'Active: 8628228 kB' 'Inactive: 3709736 kB' 'Active(anon): 8133952 kB' 'Inactive(anon): 0 kB' 'Active(file): 494276 kB' 'Inactive(file): 3709736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540768 kB' 'Mapped: 177336 kB' 'Shmem: 7596704 kB' 'KReclaimable: 398100 kB' 'Slab: 867924 kB' 'SReclaimable: 398100 kB' 'SUnreclaim: 469824 kB' 'KernelStack: 15888 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9425152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198816 kB' 'VmallocChunk: 0 kB' 'Percpu: 53856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 441856 kB' 'DirectMap2M: 6574080 kB' 'DirectMap1G: 95420416 kB' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.316 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76144840 kB' 'MemAvailable: 79911016 kB' 'Buffers: 9308 kB' 'Cached: 11791428 kB' 'SwapCached: 0 kB' 'Active: 8628264 kB' 'Inactive: 3709736 kB' 'Active(anon): 8133988 kB' 'Inactive(anon): 0 kB' 'Active(file): 494276 kB' 'Inactive(file): 3709736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540768 kB' 'Mapped: 177336 kB' 'Shmem: 7596724 kB' 'KReclaimable: 398100 kB' 'Slab: 867924 kB' 'SReclaimable: 398100 kB' 'SUnreclaim: 469824 kB' 'KernelStack: 15888 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9425176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198816 kB' 'VmallocChunk: 0 kB' 'Percpu: 53856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 441856 kB' 'DirectMap2M: 6574080 kB' 'DirectMap1G: 95420416 kB' 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.317 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.318 15:38:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.318 
15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.318 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- 
# return 0 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:05:09.319 nr_hugepages=1024 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:05:09.319 resv_hugepages=0 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:05:09.319 surplus_hugepages=0 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:05:09.319 anon_hugepages=0 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.319 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76145448 kB' 'MemAvailable: 79911624 kB' 'Buffers: 9308 kB' 'Cached: 11791448 kB' 'SwapCached: 0 kB' 'Active: 8628292 kB' 'Inactive: 3709736 kB' 'Active(anon): 8134016 kB' 'Inactive(anon): 0 kB' 'Active(file): 494276 kB' 'Inactive(file): 3709736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540768 kB' 'Mapped: 177336 kB' 'Shmem: 7596744 kB' 'KReclaimable: 398100 kB' 'Slab: 867924 kB' 'SReclaimable: 398100 kB' 'SUnreclaim: 469824 kB' 'KernelStack: 15888 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9425196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198816 kB' 'VmallocChunk: 0 kB' 'Percpu: 53856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 441856 kB' 'DirectMap2M: 6574080 kB' 'DirectMap1G: 95420416 kB' 
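Editor's note on the hugepages.sh lines just above: at this point the script has pulled AnonHugePages, HugePages_Surp and HugePages_Rsvd out of /proc/meminfo (all 0 on this node), echoed the summary values (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), and run the assertions at hugepages.sh@106 and @108 before re-reading HugePages_Total. A minimal standalone sketch of that verification step follows; it is written from the trace, not copied from setup/hugepages.sh, the helper name get_hp is invented for the example, and the 1024-page target is simply what this run configured.

    # Sketch only: approximates the verification traced above, not the real setup/hugepages.sh.
    get_hp() { awk -v key="$1:" '$1 == key { print $2 }' /proc/meminfo; }

    nr_hugepages=1024                        # pool size this test run configured
    anon=$(get_hp AnonHugePages)             # transparent hugepage usage in kB, 0 in this run
    surp=$(get_hp HugePages_Surp)            # surplus pages beyond the static pool, 0 in this run
    resv=$(get_hp HugePages_Rsvd)            # reserved-but-unfaulted pages, 0 in this run

    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

    # Roughly what the @106/@108 assertions in the trace verify: nothing is left
    # surplus or reserved, and the pool still holds the requested number of pages.
    (( surp == 0 && resv == 0 )) || echo "unexpected surplus/reserved hugepages"
    (( $(get_hp HugePages_Total) == nr_hugepages )) || echo "hugepage pool size drifted"

With those counters all zero, the log continues below with one more get_meminfo pass, this time for HugePages_Total.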
00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.320 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.321 15:38:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- 
# for node in /sys/devices/system/node/node+([0-9]) 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064936 kB' 'MemFree: 42001052 kB' 'MemUsed: 6063884 kB' 'SwapCached: 0 kB' 'Active: 2981600 kB' 'Inactive: 130664 kB' 'Active(anon): 2722428 kB' 'Inactive(anon): 0 kB' 'Active(file): 259172 kB' 'Inactive(file): 130664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2844232 kB' 'Mapped: 139780 kB' 'AnonPages: 271284 kB' 'Shmem: 2454396 kB' 'KernelStack: 8552 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107500 kB' 'Slab: 369996 kB' 'SReclaimable: 107500 kB' 'SUnreclaim: 262496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.321 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.322 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.323 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.323 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.323 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.323 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.323 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.323 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.323 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.323 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.323 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.323 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.323 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.323 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.323 15:38:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.323 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.323 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.323 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.323 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.323 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:09.323 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:05:09.323 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:05:09.323 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:05:09.323 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:05:09.323 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:05:09.323 node0=1024 expecting 1024 00:05:09.323 15:38:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:05:09.323 00:05:09.323 real 0m5.907s 00:05:09.323 user 0m2.255s 00:05:09.323 sys 0m3.755s 00:05:09.323 15:38:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.323 15:38:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:09.323 ************************************ 00:05:09.323 END TEST no_shrink_alloc 00:05:09.323 ************************************ 00:05:09.582 15:38:04 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp 00:05:09.582 15:38:04 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:05:09.582 15:38:04 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:05:09.582 15:38:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:09.582 15:38:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:05:09.582 15:38:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:09.582 15:38:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:05:09.582 15:38:04 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:05:09.582 15:38:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:09.582 15:38:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:05:09.582 15:38:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:09.582 15:38:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:05:09.582 15:38:04 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:05:09.582 15:38:04 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:05:09.582 00:05:09.582 real 0m22.592s 00:05:09.582 user 0m7.658s 00:05:09.582 sys 0m12.199s 00:05:09.582 15:38:04 setup.sh.hugepages -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.582 15:38:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:09.582 ************************************ 00:05:09.582 END TEST hugepages 
00:05:09.582 ************************************ 00:05:09.582 15:38:04 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:05:09.582 15:38:04 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.582 15:38:04 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.582 15:38:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:09.582 ************************************ 00:05:09.582 START TEST driver 00:05:09.582 ************************************ 00:05:09.582 15:38:04 setup.sh.driver -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:05:09.582 * Looking for test storage... 00:05:09.582 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:05:09.582 15:38:04 setup.sh.driver -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:09.582 15:38:04 setup.sh.driver -- common/autotest_common.sh@1711 -- # lcov --version 00:05:09.582 15:38:04 setup.sh.driver -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:09.855 15:38:04 setup.sh.driver -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@344 -- # case "$op" in 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@345 -- # : 1 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@365 -- # decimal 1 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@353 -- # local d=1 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@355 -- # echo 1 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@366 -- # decimal 2 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@353 -- # local d=2 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@355 -- # echo 2 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.855 15:38:04 setup.sh.driver -- scripts/common.sh@368 -- # return 0 00:05:09.855 15:38:04 setup.sh.driver -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.855 15:38:04 setup.sh.driver -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:09.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.855 --rc genhtml_branch_coverage=1 00:05:09.855 --rc genhtml_function_coverage=1 00:05:09.855 --rc genhtml_legend=1 00:05:09.855 --rc geninfo_all_blocks=1 00:05:09.855 --rc geninfo_unexecuted_blocks=1 00:05:09.855 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:09.855 ' 00:05:09.855 15:38:04 setup.sh.driver -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:09.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.855 --rc genhtml_branch_coverage=1 00:05:09.855 --rc genhtml_function_coverage=1 00:05:09.855 --rc genhtml_legend=1 00:05:09.855 --rc geninfo_all_blocks=1 00:05:09.855 --rc geninfo_unexecuted_blocks=1 00:05:09.855 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:09.855 ' 00:05:09.855 15:38:04 setup.sh.driver -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:09.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.855 --rc genhtml_branch_coverage=1 00:05:09.855 --rc genhtml_function_coverage=1 00:05:09.855 --rc genhtml_legend=1 00:05:09.855 --rc geninfo_all_blocks=1 00:05:09.855 --rc geninfo_unexecuted_blocks=1 00:05:09.855 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:09.855 ' 00:05:09.855 15:38:04 setup.sh.driver -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:09.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.855 --rc genhtml_branch_coverage=1 00:05:09.855 --rc genhtml_function_coverage=1 00:05:09.855 --rc genhtml_legend=1 00:05:09.855 --rc geninfo_all_blocks=1 00:05:09.855 --rc geninfo_unexecuted_blocks=1 00:05:09.855 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:09.855 ' 00:05:09.855 15:38:04 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:09.855 15:38:04 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:09.855 15:38:04 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:14.046 15:38:08 setup.sh.driver -- 
setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:14.046 15:38:08 setup.sh.driver -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.046 15:38:08 setup.sh.driver -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.046 15:38:08 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:14.046 ************************************ 00:05:14.046 START TEST guess_driver 00:05:14.046 ************************************ 00:05:14.046 15:38:08 setup.sh.driver.guess_driver -- common/autotest_common.sh@1129 -- # guess_driver 00:05:14.046 15:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:14.046 15:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:14.046 15:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:14.046 15:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:14.046 15:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:14.046 15:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:14.046 15:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:14.046 15:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:14.046 15:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:14.046 15:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 160 > 0 )) 00:05:14.046 15:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:14.046 15:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:14.046 15:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:14.046 15:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:14.046 15:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:14.046 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:14.046 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:14.046 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:14.046 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:14.046 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:14.046 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:14.046 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:14.046 15:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:14.046 15:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:14.046 15:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:14.046 15:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:14.046 15:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:14.046 Looking for driver=vfio-pci 00:05:14.046 15:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:14.046 15:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- 
# setup output config 00:05:14.046 15:38:08 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.046 15:38:08 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.583 15:38:11 
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.583 15:38:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:20.075 15:38:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:20.075 15:38:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:20.075 15:38:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:20.075 15:38:14 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:20.075 15:38:14 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:20.075 15:38:14 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:20.075 15:38:14 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:24.459 00:05:24.459 real 0m10.532s 00:05:24.459 user 0m2.305s 00:05:24.459 sys 0m4.412s 00:05:24.459 15:38:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.459 15:38:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:24.459 ************************************ 00:05:24.459 END TEST guess_driver 00:05:24.459 ************************************ 00:05:24.459 00:05:24.459 real 0m14.648s 00:05:24.459 user 0m3.407s 00:05:24.459 sys 0m6.650s 00:05:24.459 15:38:19 setup.sh.driver -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.459 15:38:19 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:24.459 ************************************ 00:05:24.459 END TEST driver 00:05:24.459 ************************************ 00:05:24.459 15:38:19 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:05:24.459 15:38:19 setup.sh -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.459 15:38:19 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.459 15:38:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:24.459 ************************************ 00:05:24.459 START TEST devices 00:05:24.459 ************************************ 00:05:24.459 15:38:19 setup.sh.devices -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:05:24.459 * Looking for test storage... 00:05:24.459 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:05:24.459 15:38:19 setup.sh.devices -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.459 15:38:19 setup.sh.devices -- common/autotest_common.sh@1711 -- # lcov --version 00:05:24.459 15:38:19 setup.sh.devices -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.459 15:38:19 setup.sh.devices -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@344 -- # case "$op" in 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@345 -- # : 1 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@365 -- # decimal 1 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@353 -- # local d=1 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@355 -- # echo 1 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@366 -- # decimal 2 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@353 -- # local d=2 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@355 -- # echo 2 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.459 15:38:19 setup.sh.devices -- scripts/common.sh@368 -- # return 0 00:05:24.459 15:38:19 setup.sh.devices -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.459 15:38:19 setup.sh.devices -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.459 --rc genhtml_branch_coverage=1 00:05:24.459 --rc genhtml_function_coverage=1 00:05:24.459 --rc genhtml_legend=1 00:05:24.459 --rc geninfo_all_blocks=1 00:05:24.459 --rc geninfo_unexecuted_blocks=1 00:05:24.459 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:24.459 ' 00:05:24.459 15:38:19 setup.sh.devices -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.459 --rc genhtml_branch_coverage=1 00:05:24.459 --rc genhtml_function_coverage=1 00:05:24.459 --rc genhtml_legend=1 00:05:24.459 --rc geninfo_all_blocks=1 00:05:24.459 --rc geninfo_unexecuted_blocks=1 00:05:24.459 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:24.459 ' 00:05:24.459 15:38:19 setup.sh.devices -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:24.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.459 --rc genhtml_branch_coverage=1 00:05:24.459 --rc genhtml_function_coverage=1 00:05:24.459 --rc genhtml_legend=1 00:05:24.459 --rc geninfo_all_blocks=1 00:05:24.459 --rc geninfo_unexecuted_blocks=1 00:05:24.459 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:24.459 ' 00:05:24.459 15:38:19 setup.sh.devices -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.459 --rc genhtml_branch_coverage=1 00:05:24.459 --rc genhtml_function_coverage=1 00:05:24.459 --rc genhtml_legend=1 00:05:24.459 --rc geninfo_all_blocks=1 00:05:24.459 --rc geninfo_unexecuted_blocks=1 00:05:24.459 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:24.459 ' 00:05:24.459 15:38:19 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:24.459 15:38:19 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:24.459 15:38:19 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:24.459 15:38:19 setup.sh.devices -- setup/common.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:27.753 15:38:22 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:27.753 15:38:22 setup.sh.devices -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:27.753 15:38:22 setup.sh.devices -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:27.753 15:38:22 setup.sh.devices -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:27.753 15:38:22 setup.sh.devices -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:27.753 15:38:22 setup.sh.devices -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:27.753 15:38:22 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:27.753 15:38:22 setup.sh.devices -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:05:27.753 15:38:22 setup.sh.devices -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:27.753 15:38:22 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:27.753 15:38:22 setup.sh.devices -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:27.753 15:38:22 setup.sh.devices -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:27.753 15:38:22 setup.sh.devices -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:27.753 15:38:22 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:27.753 15:38:22 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:27.753 15:38:22 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:27.753 15:38:22 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:27.753 15:38:22 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:27.753 15:38:22 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:27.753 15:38:22 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:27.753 15:38:22 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:27.753 15:38:22 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:05:27.753 15:38:22 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:05:27.753 15:38:22 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:27.753 15:38:22 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:05:27.753 15:38:22 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:28.013 No valid GPT data, bailing 00:05:28.013 15:38:22 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:28.013 15:38:23 setup.sh.devices -- scripts/common.sh@394 -- # pt= 00:05:28.013 15:38:23 setup.sh.devices -- scripts/common.sh@395 -- # return 1 00:05:28.013 15:38:23 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:28.013 15:38:23 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:28.013 15:38:23 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:28.013 15:38:23 setup.sh.devices -- setup/common.sh@80 -- # echo 4000787030016 00:05:28.013 15:38:23 setup.sh.devices -- setup/devices.sh@204 -- # (( 4000787030016 >= min_disk_size )) 00:05:28.013 15:38:23 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:28.013 15:38:23 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 
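The device scan just traced (get_zoned_devs plus the blocks loop) boils down to: skip zoned namespaces, skip anything already carrying a partition table, and keep namespaces of at least 3 GiB (the 3221225472-byte min_disk_size in the trace; the 4000787030016-byte disk here passes easily). A simplified, hedged sketch of that selection, run as root so blkid can probe the devices:

    #!/usr/bin/env bash
    # Simplified sketch of the test-disk selection above; the real script also
    # excludes multipath "c" nodes and probes GPT with SPDK's spdk-gpt.py.
    shopt -s nullglob
    min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3221225472, as in the trace
    blocks=()
    for sysdir in /sys/block/nvme*n*; do
        dev=${sysdir##*/}
        [[ -e $sysdir/queue/zoned && $(<"$sysdir/queue/zoned") != none ]] && continue
        [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue   # already partitioned
        size=$(( $(<"$sysdir/size") * 512 ))     # the size attribute counts 512-byte sectors
        (( size >= min_disk_size )) && blocks+=("$dev")
    done
    printf 'candidate test disks: %s\n' "${blocks[*]}"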
00:05:28.013 15:38:23 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:28.013 15:38:23 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:28.013 15:38:23 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:28.013 15:38:23 setup.sh.devices -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.013 15:38:23 setup.sh.devices -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.013 15:38:23 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:28.013 ************************************ 00:05:28.013 START TEST nvme_mount 00:05:28.013 ************************************ 00:05:28.013 15:38:23 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1129 -- # nvme_mount 00:05:28.013 15:38:23 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:28.013 15:38:23 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:28.013 15:38:23 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:28.013 15:38:23 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:28.013 15:38:23 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:28.013 15:38:23 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:28.013 15:38:23 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:28.013 15:38:23 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:28.013 15:38:23 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:28.014 15:38:23 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:28.014 15:38:23 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:28.014 15:38:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:28.014 15:38:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:28.014 15:38:23 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:28.014 15:38:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:28.014 15:38:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:28.014 15:38:23 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:28.014 15:38:23 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:28.014 15:38:23 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:28.954 Creating new GPT entries in memory. 00:05:28.954 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:28.954 other utilities. 00:05:28.954 15:38:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:28.954 15:38:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:28.954 15:38:24 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:28.954 15:38:24 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:28.954 15:38:24 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:29.894 Creating new GPT entries in memory. 00:05:29.894 The operation has completed successfully. 00:05:29.894 15:38:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:29.895 15:38:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:29.895 15:38:25 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 879909 00:05:30.153 15:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:30.153 15:38:25 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:30.153 15:38:25 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:30.153 15:38:25 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:30.153 15:38:25 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:30.153 15:38:25 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:30.153 15:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:30.153 15:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:30.153 15:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:30.153 15:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:30.153 15:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:30.153 15:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:30.153 15:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:30.153 15:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:30.153 15:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:30.153 15:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.153 15:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:30.153 15:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:30.153 15:38:25 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:30.153 15:38:25 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:33.451 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.451 15:38:28 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:33.451 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:33.451 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.451 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.451 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.451 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.451 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.451 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.451 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.451 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.451 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.451 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.451 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.451 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.451 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.451 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.451 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.451 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
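The nvme_mount phase traced above partitions the disk, formats and mounts the partition, then re-runs "setup output config" and expects the "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev" line seen here. A compressed, destructive sketch of the setup half, with placeholder device and mount point, and with udevadm settle standing in for the script's own uevent-sync helper:

    #!/usr/bin/env bash
    # Destructive sketch of the nvme_mount setup steps traced above; the device
    # path and mount point are placeholders, not necessarily valid on your box.
    set -e
    disk=/dev/nvme0n1
    mnt=/tmp/nvme_mount_test
    sgdisk "$disk" --zap-all                     # wipe any old GPT/MBR structures
    sgdisk "$disk" --new=1:2048:2099199          # one partition, sectors 2048..2099199 (~1 GiB)
    udevadm settle                               # wait for the p1 node to appear
    mkfs.ext4 -qF "${disk}p1"
    mkdir -p "$mnt"
    mount "${disk}p1" "$mnt"
    : > "$mnt/test_nvme"                         # marker file the verify pass checks for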
00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:33.452 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:33.452 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:33.452 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:33.452 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:33.452 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:33.452 15:38:28 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:33.712 15:38:28 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.712 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:33.712 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:33.712 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:33.713 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.713 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:33.713 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:33.713 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:33.713 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:33.713 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:33.713 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.713 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:33.713 15:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:33.713 15:38:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.713 15:38:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 
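Between test phases the trace runs cleanup_nvme: unmount if mounted, then blank the filesystem and partition-table signatures on the partition and on the whole disk, which is what produces the "bytes were erased" lines. A hedged sketch of that teardown, with placeholder paths:

    #!/usr/bin/env bash
    # Sketch of the cleanup_nvme teardown seen above; paths are placeholders.
    mnt=/tmp/nvme_mount_test
    disk=/dev/nvme0n1
    mountpoint -q "$mnt" && umount "$mnt"
    [[ -b ${disk}p1 ]] && wipefs --all "${disk}p1"   # ext4 superblock signature
    [[ -b $disk ]] && wipefs --all "$disk"           # GPT, backup GPT and protective MBR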
00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.256 15:38:31 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:38.799 15:38:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.059 15:38:34 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:39.059 15:38:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:39.059 15:38:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:39.059 15:38:34 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:39.060 15:38:34 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:39.060 15:38:34 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:39.060 15:38:34 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:39.060 15:38:34 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:39.060 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:39.060 00:05:39.060 real 0m11.084s 00:05:39.060 user 0m3.066s 00:05:39.060 sys 0m5.765s 00:05:39.060 15:38:34 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.060 15:38:34 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:39.060 ************************************ 00:05:39.060 END TEST nvme_mount 00:05:39.060 ************************************ 00:05:39.060 15:38:34 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:39.060 15:38:34 setup.sh.devices -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.060 15:38:34 setup.sh.devices -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.060 15:38:34 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:39.060 ************************************ 00:05:39.060 START TEST dm_mount 00:05:39.060 ************************************ 00:05:39.060 15:38:34 setup.sh.devices.dm_mount -- common/autotest_common.sh@1129 -- # dm_mount 00:05:39.060 
15:38:34 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:39.060 15:38:34 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:39.060 15:38:34 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:39.060 15:38:34 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:39.060 15:38:34 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:39.060 15:38:34 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:39.060 15:38:34 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:39.060 15:38:34 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:39.060 15:38:34 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:39.060 15:38:34 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:39.060 15:38:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:39.060 15:38:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:39.060 15:38:34 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:39.060 15:38:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:39.060 15:38:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:39.060 15:38:34 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:39.060 15:38:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:39.060 15:38:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:39.060 15:38:34 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:39.060 15:38:34 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:39.060 15:38:34 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:39.999 Creating new GPT entries in memory. 00:05:39.999 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:39.999 other utilities. 00:05:39.999 15:38:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:39.999 15:38:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:39.999 15:38:35 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:39.999 15:38:35 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:39.999 15:38:35 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:41.381 Creating new GPT entries in memory. 00:05:41.381 The operation has completed successfully. 00:05:41.381 15:38:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:41.381 15:38:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:41.381 15:38:36 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:41.381 15:38:36 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:41.381 15:38:36 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:42.322 The operation has completed successfully. 
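At this point the dm_mount test has carved two ~1 GiB partitions; the next records show "dmsetup create nvme_dm_test" and /dev/mapper/nvme_dm_test resolving to /dev/dm-0 with both partitions as holders. The mapping table itself is not visible in this log, but a plain linear concatenation of the two partitions would be consistent with those holders; a hedged sketch of that shape, with placeholder names:

    #!/usr/bin/env bash
    # Assumed linear table for the dm target; the actual table handed to
    # "dmsetup create nvme_dm_test" is not shown in this log.
    p1=/dev/nvme0n1p1
    p2=/dev/nvme0n1p2
    s1=$(blockdev --getsz "$p1")                 # partition lengths in 512-byte sectors
    s2=$(blockdev --getsz "$p2")
    printf '%s\n' "0 $s1 linear $p1 0" \
                  "$s1 $s2 linear $p2 0" | dmsetup create nvme_dm_test
    readlink -f /dev/mapper/nvme_dm_test         # resolves to something like /dev/dm-0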
00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 883537 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- 
setup/devices.sh@53 -- # local found=0 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.322 15:38:37 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:44.863 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.864 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:44.864 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.864 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:44.864 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.864 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:44.864 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.123 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:45.123 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:45.123 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:45.123 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:45.123 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:45.123 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:45.123 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:45.123 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:45.123 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:45.123 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:45.123 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:45.123 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:45.123 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:45.123 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:45.123 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.123 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:45.123 15:38:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:45.123 15:38:40 
setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:45.123 15:38:40 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:48.418 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:48.418 00:05:48.418 real 0m9.233s 00:05:48.418 user 0m2.061s 00:05:48.418 sys 0m4.200s 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.418 15:38:43 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:48.418 ************************************ 00:05:48.418 END TEST dm_mount 00:05:48.418 ************************************ 00:05:48.418 15:38:43 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:48.418 15:38:43 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:48.418 15:38:43 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:48.418 15:38:43 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:48.418 15:38:43 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:48.418 15:38:43 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:48.418 15:38:43 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:48.678 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:48.678 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:48.678 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:48.678 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:48.678 15:38:43 
setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:48.678 15:38:43 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:48.678 15:38:43 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:48.678 15:38:43 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:48.678 15:38:43 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:48.678 15:38:43 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:48.678 15:38:43 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:48.678 00:05:48.678 real 0m24.391s 00:05:48.678 user 0m6.630s 00:05:48.678 sys 0m12.453s 00:05:48.678 15:38:43 setup.sh.devices -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.678 15:38:43 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:48.678 ************************************ 00:05:48.678 END TEST devices 00:05:48.678 ************************************ 00:05:48.678 00:05:48.678 real 1m25.423s 00:05:48.678 user 0m24.676s 00:05:48.678 sys 0m44.432s 00:05:48.678 15:38:43 setup.sh -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.678 15:38:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:48.678 ************************************ 00:05:48.678 END TEST setup.sh 00:05:48.678 ************************************ 00:05:48.678 15:38:43 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:05:51.978 Hugepages 00:05:51.978 node hugesize free / total 00:05:51.978 node0 1048576kB 0 / 0 00:05:51.978 node0 2048kB 1024 / 1024 00:05:51.978 node1 1048576kB 0 / 0 00:05:51.978 node1 2048kB 1024 / 1024 00:05:51.978 00:05:51.978 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:51.978 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:51.978 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:51.978 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:51.978 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:51.978 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:51.978 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:51.978 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:51.978 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:51.978 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:51.978 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:51.978 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:51.978 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:51.978 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:51.978 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:51.978 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:51.978 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:51.978 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:51.978 15:38:47 -- spdk/autotest.sh@117 -- # uname -s 00:05:51.978 15:38:47 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:51.978 15:38:47 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:51.978 15:38:47 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:55.266 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:55.266 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:55.266 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:55.266 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:55.266 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:55.266 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:55.266 0000:00:04.1 
(8086 2021): ioatdma -> vfio-pci 00:05:55.266 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:55.266 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:55.266 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:55.266 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:55.266 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:55.266 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:55.266 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:55.266 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:55.266 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:58.557 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:58.557 15:38:53 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:59.493 15:38:54 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:59.493 15:38:54 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:59.493 15:38:54 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:59.493 15:38:54 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:59.493 15:38:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:59.493 15:38:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:59.493 15:38:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:59.493 15:38:54 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:59.493 15:38:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:59.493 15:38:54 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:59.493 15:38:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:05:59.493 15:38:54 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:06:02.026 Waiting for block devices as requested 00:06:02.285 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:06:02.285 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:02.543 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:02.543 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:02.543 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:02.543 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:02.802 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:02.802 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:02.802 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:03.062 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:03.062 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:03.062 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:03.322 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:03.322 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:03.322 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:03.322 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:03.582 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:03.582 15:38:58 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:03.582 15:38:58 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:06:03.582 15:38:58 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:06:03.582 15:38:58 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:06:03.582 15:38:58 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:06:03.582 15:38:58 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:06:03.582 15:38:58 -- common/autotest_common.sh@1492 -- # basename 
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:06:03.582 15:38:58 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:03.582 15:38:58 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:03.582 15:38:58 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:03.582 15:38:58 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:03.582 15:38:58 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:03.582 15:38:58 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:03.582 15:38:58 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:06:03.582 15:38:58 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:03.582 15:38:58 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:03.582 15:38:58 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:03.582 15:38:58 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:03.582 15:38:58 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:03.582 15:38:58 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:03.582 15:38:58 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:03.582 15:38:58 -- common/autotest_common.sh@1543 -- # continue 00:06:03.582 15:38:58 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:03.582 15:38:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:03.582 15:38:58 -- common/autotest_common.sh@10 -- # set +x 00:06:03.582 15:38:58 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:03.582 15:38:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.582 15:38:58 -- common/autotest_common.sh@10 -- # set +x 00:06:03.582 15:38:58 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:06:06.869 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:06.869 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:06.869 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:06.869 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:06.869 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:06.869 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:06.869 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:06.869 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:06.869 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:06.869 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:06.869 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:06.869 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:06.869 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:06.869 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:06.869 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:06.870 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:10.150 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:06:10.150 15:39:05 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:10.150 15:39:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:10.150 15:39:05 -- common/autotest_common.sh@10 -- # set +x 00:06:10.150 15:39:05 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:10.150 15:39:05 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:10.150 15:39:05 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:10.150 15:39:05 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:10.150 15:39:05 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:10.150 15:39:05 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:10.150 15:39:05 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:10.150 15:39:05 -- 
common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:10.150 15:39:05 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:10.150 15:39:05 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:10.150 15:39:05 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:10.150 15:39:05 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:10.150 15:39:05 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:10.150 15:39:05 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:10.150 15:39:05 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:06:10.150 15:39:05 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:10.150 15:39:05 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:06:10.150 15:39:05 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:06:10.150 15:39:05 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:10.150 15:39:05 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:06:10.150 15:39:05 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:06:10.150 15:39:05 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:06:10.150 15:39:05 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:06:10.150 15:39:05 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=892046 00:06:10.150 15:39:05 -- common/autotest_common.sh@1585 -- # waitforlisten 892046 00:06:10.150 15:39:05 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.150 15:39:05 -- common/autotest_common.sh@835 -- # '[' -z 892046 ']' 00:06:10.150 15:39:05 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.150 15:39:05 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.150 15:39:05 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.150 15:39:05 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.150 15:39:05 -- common/autotest_common.sh@10 -- # set +x 00:06:10.150 [2024-12-09 15:39:05.243737] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
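The get_nvme_bdfs helper traced just above assembles its device list by piping gen_nvme.sh into jq (autotest_common.sh@1499). A minimal standalone sketch of that same pipeline, using this workspace's paths, is:

  rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  # gen_nvme.sh emits a bdev JSON config; jq pulls each controller's PCI address out of it
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"        # on this node the only entry is 0000:5e:00.0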
00:06:10.150 [2024-12-09 15:39:05.243804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid892046 ] 00:06:10.150 [2024-12-09 15:39:05.317254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.150 [2024-12-09 15:39:05.363236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.409 15:39:05 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.409 15:39:05 -- common/autotest_common.sh@868 -- # return 0 00:06:10.409 15:39:05 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:06:10.409 15:39:05 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:06:10.409 15:39:05 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:06:13.702 nvme0n1 00:06:13.702 15:39:08 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:13.702 [2024-12-09 15:39:08.765539] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:06:13.702 request: 00:06:13.702 { 00:06:13.702 "nvme_ctrlr_name": "nvme0", 00:06:13.702 "password": "test", 00:06:13.702 "method": "bdev_nvme_opal_revert", 00:06:13.702 "req_id": 1 00:06:13.702 } 00:06:13.702 Got JSON-RPC error response 00:06:13.702 response: 00:06:13.702 { 00:06:13.702 "code": -32602, 00:06:13.702 "message": "Invalid parameters" 00:06:13.702 } 00:06:13.702 15:39:08 -- common/autotest_common.sh@1591 -- # true 00:06:13.702 15:39:08 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:06:13.702 15:39:08 -- common/autotest_common.sh@1595 -- # killprocess 892046 00:06:13.702 15:39:08 -- common/autotest_common.sh@954 -- # '[' -z 892046 ']' 00:06:13.702 15:39:08 -- common/autotest_common.sh@958 -- # kill -0 892046 00:06:13.702 15:39:08 -- common/autotest_common.sh@959 -- # uname 00:06:13.702 15:39:08 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.702 15:39:08 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 892046 00:06:13.702 15:39:08 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.702 15:39:08 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.702 15:39:08 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 892046' 00:06:13.702 killing process with pid 892046 00:06:13.702 15:39:08 -- common/autotest_common.sh@973 -- # kill 892046 00:06:13.702 15:39:08 -- common/autotest_common.sh@978 -- # wait 892046 00:06:17.890 15:39:12 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:17.890 15:39:12 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:17.890 15:39:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:17.890 15:39:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:17.890 15:39:12 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:17.890 15:39:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:17.890 15:39:12 -- common/autotest_common.sh@10 -- # set +x 00:06:17.890 15:39:12 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:17.890 15:39:12 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:06:17.890 15:39:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.890 15:39:12 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:06:17.890 15:39:12 -- common/autotest_common.sh@10 -- # set +x 00:06:17.890 ************************************ 00:06:17.890 START TEST env 00:06:17.890 ************************************ 00:06:17.890 15:39:12 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:06:17.890 * Looking for test storage... 00:06:17.890 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:06:17.890 15:39:12 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:17.890 15:39:12 env -- common/autotest_common.sh@1711 -- # lcov --version 00:06:17.890 15:39:12 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:17.890 15:39:13 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:17.890 15:39:13 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.890 15:39:13 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.890 15:39:13 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.890 15:39:13 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.890 15:39:13 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.890 15:39:13 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.890 15:39:13 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.890 15:39:13 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.890 15:39:13 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.890 15:39:13 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.890 15:39:13 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.890 15:39:13 env -- scripts/common.sh@344 -- # case "$op" in 00:06:17.890 15:39:13 env -- scripts/common.sh@345 -- # : 1 00:06:17.890 15:39:13 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.890 15:39:13 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:17.890 15:39:13 env -- scripts/common.sh@365 -- # decimal 1 00:06:17.890 15:39:13 env -- scripts/common.sh@353 -- # local d=1 00:06:17.890 15:39:13 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.890 15:39:13 env -- scripts/common.sh@355 -- # echo 1 00:06:17.890 15:39:13 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.890 15:39:13 env -- scripts/common.sh@366 -- # decimal 2 00:06:17.890 15:39:13 env -- scripts/common.sh@353 -- # local d=2 00:06:17.890 15:39:13 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.890 15:39:13 env -- scripts/common.sh@355 -- # echo 2 00:06:17.890 15:39:13 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.890 15:39:13 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.890 15:39:13 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.890 15:39:13 env -- scripts/common.sh@368 -- # return 0 00:06:17.890 15:39:13 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.890 15:39:13 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:17.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.890 --rc genhtml_branch_coverage=1 00:06:17.890 --rc genhtml_function_coverage=1 00:06:17.890 --rc genhtml_legend=1 00:06:17.890 --rc geninfo_all_blocks=1 00:06:17.890 --rc geninfo_unexecuted_blocks=1 00:06:17.890 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:17.890 ' 00:06:17.890 15:39:13 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:17.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.890 --rc genhtml_branch_coverage=1 00:06:17.890 --rc genhtml_function_coverage=1 00:06:17.890 --rc genhtml_legend=1 00:06:17.890 --rc geninfo_all_blocks=1 00:06:17.890 --rc geninfo_unexecuted_blocks=1 00:06:17.890 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:17.890 ' 00:06:17.890 15:39:13 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:17.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.890 --rc genhtml_branch_coverage=1 00:06:17.890 --rc genhtml_function_coverage=1 00:06:17.890 --rc genhtml_legend=1 00:06:17.890 --rc geninfo_all_blocks=1 00:06:17.890 --rc geninfo_unexecuted_blocks=1 00:06:17.890 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:17.890 ' 00:06:17.890 15:39:13 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:17.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.890 --rc genhtml_branch_coverage=1 00:06:17.890 --rc genhtml_function_coverage=1 00:06:17.890 --rc genhtml_legend=1 00:06:17.890 --rc geninfo_all_blocks=1 00:06:17.890 --rc geninfo_unexecuted_blocks=1 00:06:17.890 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:17.890 ' 00:06:17.890 15:39:13 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:06:17.890 15:39:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.890 15:39:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.890 15:39:13 env -- common/autotest_common.sh@10 -- # set +x 00:06:17.890 ************************************ 00:06:17.890 START TEST env_memory 00:06:17.890 ************************************ 00:06:17.890 15:39:13 env.env_memory -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:06:17.890 00:06:17.890 00:06:17.890 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.890 http://cunit.sourceforge.net/ 00:06:17.890 00:06:17.890 00:06:17.890 Suite: memory 00:06:17.890 Test: alloc and free memory map ...[2024-12-09 15:39:13.088497] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:17.890 passed 00:06:17.890 Test: mem map translation ...[2024-12-09 15:39:13.103190] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:17.890 [2024-12-09 15:39:13.103209] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:17.890 [2024-12-09 15:39:13.103243] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:17.890 [2024-12-09 15:39:13.103252] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:18.149 passed 00:06:18.149 Test: mem map registration ...[2024-12-09 15:39:13.126465] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:18.150 [2024-12-09 15:39:13.126484] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:18.150 passed 00:06:18.150 Test: mem map adjacent registrations ...passed 00:06:18.150 00:06:18.150 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.150 suites 1 1 n/a 0 0 00:06:18.150 tests 4 4 4 0 0 00:06:18.150 asserts 152 152 152 0 n/a 00:06:18.150 00:06:18.150 Elapsed time = 0.090 seconds 00:06:18.150 00:06:18.150 real 0m0.103s 00:06:18.150 user 0m0.089s 00:06:18.150 sys 0m0.013s 00:06:18.150 15:39:13 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.150 15:39:13 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:18.150 ************************************ 00:06:18.150 END TEST env_memory 00:06:18.150 ************************************ 00:06:18.150 15:39:13 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:18.150 15:39:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.150 15:39:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.150 15:39:13 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.150 ************************************ 00:06:18.150 START TEST env_vtophys 00:06:18.150 ************************************ 00:06:18.150 15:39:13 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:18.150 EAL: lib.eal log level changed from notice to debug 00:06:18.150 EAL: Detected lcore 0 as core 0 on socket 0 00:06:18.150 EAL: Detected lcore 1 as core 1 on socket 0 00:06:18.150 EAL: Detected lcore 2 as core 2 on socket 0 00:06:18.150 EAL: Detected lcore 3 as 
core 3 on socket 0 00:06:18.150 EAL: Detected lcore 4 as core 4 on socket 0 00:06:18.150 EAL: Detected lcore 5 as core 8 on socket 0 00:06:18.150 EAL: Detected lcore 6 as core 9 on socket 0 00:06:18.150 EAL: Detected lcore 7 as core 10 on socket 0 00:06:18.150 EAL: Detected lcore 8 as core 11 on socket 0 00:06:18.150 EAL: Detected lcore 9 as core 16 on socket 0 00:06:18.150 EAL: Detected lcore 10 as core 17 on socket 0 00:06:18.150 EAL: Detected lcore 11 as core 18 on socket 0 00:06:18.150 EAL: Detected lcore 12 as core 19 on socket 0 00:06:18.150 EAL: Detected lcore 13 as core 20 on socket 0 00:06:18.150 EAL: Detected lcore 14 as core 24 on socket 0 00:06:18.150 EAL: Detected lcore 15 as core 25 on socket 0 00:06:18.150 EAL: Detected lcore 16 as core 26 on socket 0 00:06:18.150 EAL: Detected lcore 17 as core 27 on socket 0 00:06:18.150 EAL: Detected lcore 18 as core 0 on socket 1 00:06:18.150 EAL: Detected lcore 19 as core 1 on socket 1 00:06:18.150 EAL: Detected lcore 20 as core 2 on socket 1 00:06:18.150 EAL: Detected lcore 21 as core 3 on socket 1 00:06:18.150 EAL: Detected lcore 22 as core 4 on socket 1 00:06:18.150 EAL: Detected lcore 23 as core 8 on socket 1 00:06:18.150 EAL: Detected lcore 24 as core 9 on socket 1 00:06:18.150 EAL: Detected lcore 25 as core 10 on socket 1 00:06:18.150 EAL: Detected lcore 26 as core 11 on socket 1 00:06:18.150 EAL: Detected lcore 27 as core 16 on socket 1 00:06:18.150 EAL: Detected lcore 28 as core 17 on socket 1 00:06:18.150 EAL: Detected lcore 29 as core 18 on socket 1 00:06:18.150 EAL: Detected lcore 30 as core 19 on socket 1 00:06:18.150 EAL: Detected lcore 31 as core 20 on socket 1 00:06:18.150 EAL: Detected lcore 32 as core 24 on socket 1 00:06:18.150 EAL: Detected lcore 33 as core 25 on socket 1 00:06:18.150 EAL: Detected lcore 34 as core 26 on socket 1 00:06:18.150 EAL: Detected lcore 35 as core 27 on socket 1 00:06:18.150 EAL: Detected lcore 36 as core 0 on socket 0 00:06:18.150 EAL: Detected lcore 37 as core 1 on socket 0 00:06:18.150 EAL: Detected lcore 38 as core 2 on socket 0 00:06:18.150 EAL: Detected lcore 39 as core 3 on socket 0 00:06:18.150 EAL: Detected lcore 40 as core 4 on socket 0 00:06:18.150 EAL: Detected lcore 41 as core 8 on socket 0 00:06:18.150 EAL: Detected lcore 42 as core 9 on socket 0 00:06:18.150 EAL: Detected lcore 43 as core 10 on socket 0 00:06:18.150 EAL: Detected lcore 44 as core 11 on socket 0 00:06:18.150 EAL: Detected lcore 45 as core 16 on socket 0 00:06:18.150 EAL: Detected lcore 46 as core 17 on socket 0 00:06:18.150 EAL: Detected lcore 47 as core 18 on socket 0 00:06:18.150 EAL: Detected lcore 48 as core 19 on socket 0 00:06:18.150 EAL: Detected lcore 49 as core 20 on socket 0 00:06:18.150 EAL: Detected lcore 50 as core 24 on socket 0 00:06:18.150 EAL: Detected lcore 51 as core 25 on socket 0 00:06:18.150 EAL: Detected lcore 52 as core 26 on socket 0 00:06:18.150 EAL: Detected lcore 53 as core 27 on socket 0 00:06:18.150 EAL: Detected lcore 54 as core 0 on socket 1 00:06:18.150 EAL: Detected lcore 55 as core 1 on socket 1 00:06:18.150 EAL: Detected lcore 56 as core 2 on socket 1 00:06:18.150 EAL: Detected lcore 57 as core 3 on socket 1 00:06:18.150 EAL: Detected lcore 58 as core 4 on socket 1 00:06:18.150 EAL: Detected lcore 59 as core 8 on socket 1 00:06:18.150 EAL: Detected lcore 60 as core 9 on socket 1 00:06:18.150 EAL: Detected lcore 61 as core 10 on socket 1 00:06:18.150 EAL: Detected lcore 62 as core 11 on socket 1 00:06:18.150 EAL: Detected lcore 63 as core 16 on socket 1 00:06:18.150 EAL: 
Detected lcore 64 as core 17 on socket 1 00:06:18.150 EAL: Detected lcore 65 as core 18 on socket 1 00:06:18.150 EAL: Detected lcore 66 as core 19 on socket 1 00:06:18.150 EAL: Detected lcore 67 as core 20 on socket 1 00:06:18.150 EAL: Detected lcore 68 as core 24 on socket 1 00:06:18.150 EAL: Detected lcore 69 as core 25 on socket 1 00:06:18.150 EAL: Detected lcore 70 as core 26 on socket 1 00:06:18.150 EAL: Detected lcore 71 as core 27 on socket 1 00:06:18.150 EAL: Maximum logical cores by configuration: 128 00:06:18.150 EAL: Detected CPU lcores: 72 00:06:18.150 EAL: Detected NUMA nodes: 2 00:06:18.150 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:18.150 EAL: Checking presence of .so 'librte_eal.so.24' 00:06:18.150 EAL: Checking presence of .so 'librte_eal.so' 00:06:18.150 EAL: Detected static linkage of DPDK 00:06:18.150 EAL: No shared files mode enabled, IPC will be disabled 00:06:18.150 EAL: Bus pci wants IOVA as 'DC' 00:06:18.150 EAL: Buses did not request a specific IOVA mode. 00:06:18.150 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:18.150 EAL: Selected IOVA mode 'VA' 00:06:18.150 EAL: Probing VFIO support... 00:06:18.150 EAL: IOMMU type 1 (Type 1) is supported 00:06:18.150 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:18.150 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:18.150 EAL: VFIO support initialized 00:06:18.150 EAL: Ask a virtual area of 0x2e000 bytes 00:06:18.150 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:18.150 EAL: Setting up physically contiguous memory... 00:06:18.150 EAL: Setting maximum number of open files to 524288 00:06:18.150 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:18.150 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:18.150 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:18.150 EAL: Ask a virtual area of 0x61000 bytes 00:06:18.150 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:18.150 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:18.150 EAL: Ask a virtual area of 0x400000000 bytes 00:06:18.150 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:18.150 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:18.150 EAL: Ask a virtual area of 0x61000 bytes 00:06:18.150 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:18.150 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:18.150 EAL: Ask a virtual area of 0x400000000 bytes 00:06:18.150 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:18.150 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:18.150 EAL: Ask a virtual area of 0x61000 bytes 00:06:18.150 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:18.150 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:18.150 EAL: Ask a virtual area of 0x400000000 bytes 00:06:18.150 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:18.150 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:18.150 EAL: Ask a virtual area of 0x61000 bytes 00:06:18.150 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:18.150 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:18.150 EAL: Ask a virtual area of 0x400000000 bytes 00:06:18.150 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:18.150 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:18.150 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:06:18.150 EAL: Ask a virtual area of 0x61000 bytes 00:06:18.150 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:18.150 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:18.150 EAL: Ask a virtual area of 0x400000000 bytes 00:06:18.150 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:18.150 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:18.150 EAL: Ask a virtual area of 0x61000 bytes 00:06:18.150 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:18.150 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:18.150 EAL: Ask a virtual area of 0x400000000 bytes 00:06:18.150 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:18.150 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:18.150 EAL: Ask a virtual area of 0x61000 bytes 00:06:18.150 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:18.150 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:18.150 EAL: Ask a virtual area of 0x400000000 bytes 00:06:18.150 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:18.150 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:18.150 EAL: Ask a virtual area of 0x61000 bytes 00:06:18.150 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:18.150 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:18.150 EAL: Ask a virtual area of 0x400000000 bytes 00:06:18.150 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:18.150 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:18.150 EAL: Hugepages will be freed exactly as allocated. 00:06:18.150 EAL: No shared files mode enabled, IPC is disabled 00:06:18.150 EAL: No shared files mode enabled, IPC is disabled 00:06:18.150 EAL: TSC frequency is ~2300000 KHz 00:06:18.150 EAL: Main lcore 0 is ready (tid=7f6b42adaa00;cpuset=[0]) 00:06:18.150 EAL: Trying to obtain current memory policy. 00:06:18.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.150 EAL: Restoring previous memory policy: 0 00:06:18.150 EAL: request: mp_malloc_sync 00:06:18.150 EAL: No shared files mode enabled, IPC is disabled 00:06:18.150 EAL: Heap on socket 0 was expanded by 2MB 00:06:18.150 EAL: No shared files mode enabled, IPC is disabled 00:06:18.151 EAL: Mem event callback 'spdk:(nil)' registered 00:06:18.151 00:06:18.151 00:06:18.151 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.151 http://cunit.sourceforge.net/ 00:06:18.151 00:06:18.151 00:06:18.151 Suite: components_suite 00:06:18.151 Test: vtophys_malloc_test ...passed 00:06:18.151 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:18.151 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.151 EAL: Restoring previous memory policy: 4 00:06:18.151 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.151 EAL: request: mp_malloc_sync 00:06:18.151 EAL: No shared files mode enabled, IPC is disabled 00:06:18.151 EAL: Heap on socket 0 was expanded by 4MB 00:06:18.151 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.151 EAL: request: mp_malloc_sync 00:06:18.151 EAL: No shared files mode enabled, IPC is disabled 00:06:18.151 EAL: Heap on socket 0 was shrunk by 4MB 00:06:18.151 EAL: Trying to obtain current memory policy. 
00:06:18.151 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.151 EAL: Restoring previous memory policy: 4 00:06:18.151 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.151 EAL: request: mp_malloc_sync 00:06:18.151 EAL: No shared files mode enabled, IPC is disabled 00:06:18.151 EAL: Heap on socket 0 was expanded by 6MB 00:06:18.151 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.151 EAL: request: mp_malloc_sync 00:06:18.151 EAL: No shared files mode enabled, IPC is disabled 00:06:18.151 EAL: Heap on socket 0 was shrunk by 6MB 00:06:18.151 EAL: Trying to obtain current memory policy. 00:06:18.151 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.151 EAL: Restoring previous memory policy: 4 00:06:18.151 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.151 EAL: request: mp_malloc_sync 00:06:18.151 EAL: No shared files mode enabled, IPC is disabled 00:06:18.151 EAL: Heap on socket 0 was expanded by 10MB 00:06:18.151 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.151 EAL: request: mp_malloc_sync 00:06:18.151 EAL: No shared files mode enabled, IPC is disabled 00:06:18.151 EAL: Heap on socket 0 was shrunk by 10MB 00:06:18.151 EAL: Trying to obtain current memory policy. 00:06:18.151 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.151 EAL: Restoring previous memory policy: 4 00:06:18.151 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.151 EAL: request: mp_malloc_sync 00:06:18.151 EAL: No shared files mode enabled, IPC is disabled 00:06:18.151 EAL: Heap on socket 0 was expanded by 18MB 00:06:18.151 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.151 EAL: request: mp_malloc_sync 00:06:18.151 EAL: No shared files mode enabled, IPC is disabled 00:06:18.151 EAL: Heap on socket 0 was shrunk by 18MB 00:06:18.151 EAL: Trying to obtain current memory policy. 00:06:18.151 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.151 EAL: Restoring previous memory policy: 4 00:06:18.151 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.151 EAL: request: mp_malloc_sync 00:06:18.151 EAL: No shared files mode enabled, IPC is disabled 00:06:18.151 EAL: Heap on socket 0 was expanded by 34MB 00:06:18.151 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.151 EAL: request: mp_malloc_sync 00:06:18.151 EAL: No shared files mode enabled, IPC is disabled 00:06:18.151 EAL: Heap on socket 0 was shrunk by 34MB 00:06:18.151 EAL: Trying to obtain current memory policy. 00:06:18.151 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.151 EAL: Restoring previous memory policy: 4 00:06:18.151 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.151 EAL: request: mp_malloc_sync 00:06:18.151 EAL: No shared files mode enabled, IPC is disabled 00:06:18.151 EAL: Heap on socket 0 was expanded by 66MB 00:06:18.151 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.410 EAL: request: mp_malloc_sync 00:06:18.410 EAL: No shared files mode enabled, IPC is disabled 00:06:18.410 EAL: Heap on socket 0 was shrunk by 66MB 00:06:18.410 EAL: Trying to obtain current memory policy. 
00:06:18.410 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.410 EAL: Restoring previous memory policy: 4 00:06:18.410 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.410 EAL: request: mp_malloc_sync 00:06:18.410 EAL: No shared files mode enabled, IPC is disabled 00:06:18.410 EAL: Heap on socket 0 was expanded by 130MB 00:06:18.410 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.410 EAL: request: mp_malloc_sync 00:06:18.410 EAL: No shared files mode enabled, IPC is disabled 00:06:18.410 EAL: Heap on socket 0 was shrunk by 130MB 00:06:18.410 EAL: Trying to obtain current memory policy. 00:06:18.410 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.410 EAL: Restoring previous memory policy: 4 00:06:18.410 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.410 EAL: request: mp_malloc_sync 00:06:18.410 EAL: No shared files mode enabled, IPC is disabled 00:06:18.410 EAL: Heap on socket 0 was expanded by 258MB 00:06:18.410 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.410 EAL: request: mp_malloc_sync 00:06:18.410 EAL: No shared files mode enabled, IPC is disabled 00:06:18.410 EAL: Heap on socket 0 was shrunk by 258MB 00:06:18.410 EAL: Trying to obtain current memory policy. 00:06:18.410 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.668 EAL: Restoring previous memory policy: 4 00:06:18.668 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.669 EAL: request: mp_malloc_sync 00:06:18.669 EAL: No shared files mode enabled, IPC is disabled 00:06:18.669 EAL: Heap on socket 0 was expanded by 514MB 00:06:18.669 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.669 EAL: request: mp_malloc_sync 00:06:18.669 EAL: No shared files mode enabled, IPC is disabled 00:06:18.669 EAL: Heap on socket 0 was shrunk by 514MB 00:06:18.669 EAL: Trying to obtain current memory policy. 
00:06:18.669 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.927 EAL: Restoring previous memory policy: 4 00:06:18.927 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.927 EAL: request: mp_malloc_sync 00:06:18.927 EAL: No shared files mode enabled, IPC is disabled 00:06:18.927 EAL: Heap on socket 0 was expanded by 1026MB 00:06:19.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.185 EAL: request: mp_malloc_sync 00:06:19.185 EAL: No shared files mode enabled, IPC is disabled 00:06:19.185 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:19.185 passed 00:06:19.185 00:06:19.185 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.185 suites 1 1 n/a 0 0 00:06:19.185 tests 2 2 2 0 0 00:06:19.185 asserts 497 497 497 0 n/a 00:06:19.185 00:06:19.185 Elapsed time = 0.977 seconds 00:06:19.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.185 EAL: request: mp_malloc_sync 00:06:19.185 EAL: No shared files mode enabled, IPC is disabled 00:06:19.185 EAL: Heap on socket 0 was shrunk by 2MB 00:06:19.185 EAL: No shared files mode enabled, IPC is disabled 00:06:19.185 EAL: No shared files mode enabled, IPC is disabled 00:06:19.185 EAL: No shared files mode enabled, IPC is disabled 00:06:19.185 00:06:19.185 real 0m1.097s 00:06:19.185 user 0m0.627s 00:06:19.185 sys 0m0.446s 00:06:19.185 15:39:14 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.185 15:39:14 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:19.185 ************************************ 00:06:19.185 END TEST env_vtophys 00:06:19.185 ************************************ 00:06:19.185 15:39:14 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:06:19.185 15:39:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.185 15:39:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.185 15:39:14 env -- common/autotest_common.sh@10 -- # set +x 00:06:19.185 ************************************ 00:06:19.185 START TEST env_pci 00:06:19.185 ************************************ 00:06:19.186 15:39:14 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:06:19.444 00:06:19.444 00:06:19.444 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.444 http://cunit.sourceforge.net/ 00:06:19.444 00:06:19.444 00:06:19.444 Suite: pci 00:06:19.444 Test: pci_hook ...[2024-12-09 15:39:14.417725] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1118:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 893365 has claimed it 00:06:19.444 EAL: Cannot find device (10000:00:01.0) 00:06:19.444 EAL: Failed to attach device on primary process 00:06:19.444 passed 00:06:19.444 00:06:19.444 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.444 suites 1 1 n/a 0 0 00:06:19.444 tests 1 1 1 0 0 00:06:19.444 asserts 25 25 25 0 n/a 00:06:19.444 00:06:19.444 Elapsed time = 0.031 seconds 00:06:19.444 00:06:19.444 real 0m0.049s 00:06:19.444 user 0m0.010s 00:06:19.444 sys 0m0.039s 00:06:19.444 15:39:14 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.444 15:39:14 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:19.444 ************************************ 00:06:19.444 END TEST env_pci 00:06:19.444 ************************************ 00:06:19.444 15:39:14 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:19.444 
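env.sh@14 above starts assembling the EAL arguments for the next sub-test; the lines that follow add the uname check, append --base-virtaddr, and launch env_dpdk_post_init with exactly those arguments. A condensed sketch of that assembly, with the binary path taken from this workspace, is:

  rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  argv='-c 0x1 '                                                # single-core mask, as traced
  [ "$(uname)" = Linux ] && argv+='--base-virtaddr=0x200000000000'
  # argv is left unquoted on purpose so the two options split into separate arguments
  "$rootdir/test/env/env_dpdk_post_init/env_dpdk_post_init" $argv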
15:39:14 env -- env/env.sh@15 -- # uname 00:06:19.444 15:39:14 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:19.444 15:39:14 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:19.444 15:39:14 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:19.444 15:39:14 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:19.444 15:39:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.444 15:39:14 env -- common/autotest_common.sh@10 -- # set +x 00:06:19.444 ************************************ 00:06:19.444 START TEST env_dpdk_post_init 00:06:19.444 ************************************ 00:06:19.444 15:39:14 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:19.444 EAL: Detected CPU lcores: 72 00:06:19.444 EAL: Detected NUMA nodes: 2 00:06:19.444 EAL: Detected static linkage of DPDK 00:06:19.444 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:19.444 EAL: Selected IOVA mode 'VA' 00:06:19.444 EAL: VFIO support initialized 00:06:19.444 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:19.444 EAL: Using IOMMU type 1 (Type 1) 00:06:20.381 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:06:25.650 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:06:25.650 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001000000 00:06:25.908 Starting DPDK initialization... 00:06:25.908 Starting SPDK post initialization... 00:06:25.908 SPDK NVMe probe 00:06:25.908 Attaching to 0000:5e:00.0 00:06:25.908 Attached to 0000:5e:00.0 00:06:25.908 Cleaning up... 
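The probe that just attached to 0000:5e:00.0 relied on the controller having been rebound from the kernel nvme driver to vfio-pci by setup.sh (the 'nvme -> vfio-pci' lines earlier in this log). The script performs that rebinding itself; purely as an illustration, and not the exact commands setup.sh runs, a generic sysfs equivalent for the same BDF (assuming the vfio-pci module is already loaded) would be:

  bdf=0000:5e:00.0
  echo "$bdf"   | sudo tee /sys/bus/pci/devices/$bdf/driver/unbind     # detach from nvme
  echo vfio-pci | sudo tee /sys/bus/pci/devices/$bdf/driver_override   # pin the target driver
  echo "$bdf"   | sudo tee /sys/bus/pci/drivers_probe                  # rebind to vfio-pci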
00:06:25.908 00:06:25.908 real 0m6.497s 00:06:25.908 user 0m4.741s 00:06:25.908 sys 0m1.010s 00:06:25.908 15:39:21 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.908 15:39:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:25.908 ************************************ 00:06:25.908 END TEST env_dpdk_post_init 00:06:25.908 ************************************ 00:06:25.908 15:39:21 env -- env/env.sh@26 -- # uname 00:06:25.908 15:39:21 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:25.908 15:39:21 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:25.908 15:39:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.908 15:39:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.908 15:39:21 env -- common/autotest_common.sh@10 -- # set +x 00:06:25.908 ************************************ 00:06:25.908 START TEST env_mem_callbacks 00:06:25.908 ************************************ 00:06:25.909 15:39:21 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:25.909 EAL: Detected CPU lcores: 72 00:06:25.909 EAL: Detected NUMA nodes: 2 00:06:25.909 EAL: Detected static linkage of DPDK 00:06:25.909 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:26.168 EAL: Selected IOVA mode 'VA' 00:06:26.168 EAL: VFIO support initialized 00:06:26.168 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:26.168 00:06:26.168 00:06:26.168 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.168 http://cunit.sourceforge.net/ 00:06:26.168 00:06:26.168 00:06:26.168 Suite: memory 00:06:26.168 Test: test ... 
00:06:26.168 register 0x200000200000 2097152 00:06:26.168 malloc 3145728 00:06:26.168 register 0x200000400000 4194304 00:06:26.168 buf 0x200000500000 len 3145728 PASSED 00:06:26.168 malloc 64 00:06:26.168 buf 0x2000004fff40 len 64 PASSED 00:06:26.168 malloc 4194304 00:06:26.168 register 0x200000800000 6291456 00:06:26.168 buf 0x200000a00000 len 4194304 PASSED 00:06:26.168 free 0x200000500000 3145728 00:06:26.168 free 0x2000004fff40 64 00:06:26.168 unregister 0x200000400000 4194304 PASSED 00:06:26.168 free 0x200000a00000 4194304 00:06:26.168 unregister 0x200000800000 6291456 PASSED 00:06:26.168 malloc 8388608 00:06:26.168 register 0x200000400000 10485760 00:06:26.168 buf 0x200000600000 len 8388608 PASSED 00:06:26.168 free 0x200000600000 8388608 00:06:26.168 unregister 0x200000400000 10485760 PASSED 00:06:26.168 passed 00:06:26.168 00:06:26.168 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.168 suites 1 1 n/a 0 0 00:06:26.168 tests 1 1 1 0 0 00:06:26.168 asserts 15 15 15 0 n/a 00:06:26.168 00:06:26.168 Elapsed time = 0.004 seconds 00:06:26.168 00:06:26.168 real 0m0.050s 00:06:26.168 user 0m0.011s 00:06:26.168 sys 0m0.039s 00:06:26.168 15:39:21 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.168 15:39:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:26.168 ************************************ 00:06:26.168 END TEST env_mem_callbacks 00:06:26.168 ************************************ 00:06:26.168 00:06:26.168 real 0m8.366s 00:06:26.168 user 0m5.713s 00:06:26.168 sys 0m1.923s 00:06:26.168 15:39:21 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.168 15:39:21 env -- common/autotest_common.sh@10 -- # set +x 00:06:26.168 ************************************ 00:06:26.168 END TEST env 00:06:26.168 ************************************ 00:06:26.168 15:39:21 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:06:26.168 15:39:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.168 15:39:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.168 15:39:21 -- common/autotest_common.sh@10 -- # set +x 00:06:26.168 ************************************ 00:06:26.168 START TEST rpc 00:06:26.168 ************************************ 00:06:26.168 15:39:21 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:06:26.168 * Looking for test storage... 
00:06:26.168 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:26.168 15:39:21 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:26.168 15:39:21 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:26.168 15:39:21 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:26.427 15:39:21 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:26.427 15:39:21 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.427 15:39:21 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.427 15:39:21 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.427 15:39:21 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.427 15:39:21 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.427 15:39:21 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.427 15:39:21 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.427 15:39:21 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.427 15:39:21 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.427 15:39:21 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.427 15:39:21 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.427 15:39:21 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:26.427 15:39:21 rpc -- scripts/common.sh@345 -- # : 1 00:06:26.427 15:39:21 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.427 15:39:21 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.427 15:39:21 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:26.427 15:39:21 rpc -- scripts/common.sh@353 -- # local d=1 00:06:26.427 15:39:21 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.427 15:39:21 rpc -- scripts/common.sh@355 -- # echo 1 00:06:26.427 15:39:21 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.427 15:39:21 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:26.427 15:39:21 rpc -- scripts/common.sh@353 -- # local d=2 00:06:26.427 15:39:21 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.427 15:39:21 rpc -- scripts/common.sh@355 -- # echo 2 00:06:26.427 15:39:21 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.427 15:39:21 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.427 15:39:21 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.427 15:39:21 rpc -- scripts/common.sh@368 -- # return 0 00:06:26.427 15:39:21 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.427 15:39:21 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:26.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.427 --rc genhtml_branch_coverage=1 00:06:26.427 --rc genhtml_function_coverage=1 00:06:26.427 --rc genhtml_legend=1 00:06:26.427 --rc geninfo_all_blocks=1 00:06:26.427 --rc geninfo_unexecuted_blocks=1 00:06:26.427 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.427 ' 00:06:26.427 15:39:21 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:26.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.427 --rc genhtml_branch_coverage=1 00:06:26.427 --rc genhtml_function_coverage=1 00:06:26.427 --rc genhtml_legend=1 00:06:26.427 --rc geninfo_all_blocks=1 00:06:26.427 --rc geninfo_unexecuted_blocks=1 00:06:26.427 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.427 ' 00:06:26.427 15:39:21 rpc -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:06:26.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.427 --rc genhtml_branch_coverage=1 00:06:26.427 --rc genhtml_function_coverage=1 00:06:26.427 --rc genhtml_legend=1 00:06:26.427 --rc geninfo_all_blocks=1 00:06:26.427 --rc geninfo_unexecuted_blocks=1 00:06:26.427 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.427 ' 00:06:26.427 15:39:21 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:26.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.427 --rc genhtml_branch_coverage=1 00:06:26.427 --rc genhtml_function_coverage=1 00:06:26.427 --rc genhtml_legend=1 00:06:26.427 --rc geninfo_all_blocks=1 00:06:26.427 --rc geninfo_unexecuted_blocks=1 00:06:26.427 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.427 ' 00:06:26.427 15:39:21 rpc -- rpc/rpc.sh@65 -- # spdk_pid=894476 00:06:26.427 15:39:21 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.427 15:39:21 rpc -- rpc/rpc.sh@67 -- # waitforlisten 894476 00:06:26.427 15:39:21 rpc -- common/autotest_common.sh@835 -- # '[' -z 894476 ']' 00:06:26.427 15:39:21 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.427 15:39:21 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.427 15:39:21 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.427 15:39:21 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.427 15:39:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.427 15:39:21 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:26.427 [2024-12-09 15:39:21.489015] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:06:26.427 [2024-12-09 15:39:21.489078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid894476 ] 00:06:26.427 [2024-12-09 15:39:21.560564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.427 [2024-12-09 15:39:21.608319] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:26.427 [2024-12-09 15:39:21.608358] app.c: 616:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 894476' to capture a snapshot of events at runtime. 00:06:26.427 [2024-12-09 15:39:21.608368] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:26.427 [2024-12-09 15:39:21.608377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:26.427 [2024-12-09 15:39:21.608384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid894476 for offline analysis/debug. 
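The NOTICE lines above describe how to inspect this spdk_tgt instance's tracepoints (the 'bdev' group was enabled via '-e bdev' in rpc.sh@64). A short sketch of acting on that hint while pid 894476 is still running; the spdk_trace tool is assumed to be the one built in this tree (typically under build/bin/):

  # live snapshot, exactly as the NOTICE suggests
  spdk_trace -s spdk_tgt -p 894476
  # or keep the shared-memory trace file around for offline analysis
  cp /dev/shm/spdk_tgt_trace.pid894476 /tmp/spdk_tgt_trace.pid894476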
00:06:26.427 [2024-12-09 15:39:21.608776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.686 15:39:21 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.686 15:39:21 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:26.686 15:39:21 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:26.686 15:39:21 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:26.686 15:39:21 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:26.686 15:39:21 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:26.686 15:39:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.686 15:39:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.686 15:39:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.686 ************************************ 00:06:26.686 START TEST rpc_integrity 00:06:26.686 ************************************ 00:06:26.686 15:39:21 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:26.686 15:39:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:26.686 15:39:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.686 15:39:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.686 15:39:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.686 15:39:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:26.686 15:39:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:26.686 15:39:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:26.686 15:39:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:26.945 15:39:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.945 15:39:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.945 15:39:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.945 15:39:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:26.946 15:39:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:26.946 15:39:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.946 15:39:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.946 15:39:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.946 15:39:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:26.946 { 00:06:26.946 "name": "Malloc0", 00:06:26.946 "aliases": [ 00:06:26.946 "5ce55941-2a5a-4113-b5d3-ba46458953e4" 00:06:26.946 ], 00:06:26.946 "product_name": "Malloc disk", 00:06:26.946 "block_size": 512, 00:06:26.946 "num_blocks": 16384, 00:06:26.946 "uuid": "5ce55941-2a5a-4113-b5d3-ba46458953e4", 00:06:26.946 "assigned_rate_limits": { 00:06:26.946 "rw_ios_per_sec": 0, 00:06:26.946 "rw_mbytes_per_sec": 0, 00:06:26.946 "r_mbytes_per_sec": 0, 00:06:26.946 "w_mbytes_per_sec": 
0 00:06:26.946 }, 00:06:26.946 "claimed": false, 00:06:26.946 "zoned": false, 00:06:26.946 "supported_io_types": { 00:06:26.946 "read": true, 00:06:26.946 "write": true, 00:06:26.946 "unmap": true, 00:06:26.946 "flush": true, 00:06:26.946 "reset": true, 00:06:26.946 "nvme_admin": false, 00:06:26.946 "nvme_io": false, 00:06:26.946 "nvme_io_md": false, 00:06:26.946 "write_zeroes": true, 00:06:26.946 "zcopy": true, 00:06:26.946 "get_zone_info": false, 00:06:26.946 "zone_management": false, 00:06:26.946 "zone_append": false, 00:06:26.946 "compare": false, 00:06:26.946 "compare_and_write": false, 00:06:26.946 "abort": true, 00:06:26.946 "seek_hole": false, 00:06:26.946 "seek_data": false, 00:06:26.946 "copy": true, 00:06:26.946 "nvme_iov_md": false 00:06:26.946 }, 00:06:26.946 "memory_domains": [ 00:06:26.946 { 00:06:26.946 "dma_device_id": "system", 00:06:26.946 "dma_device_type": 1 00:06:26.946 }, 00:06:26.946 { 00:06:26.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:26.946 "dma_device_type": 2 00:06:26.946 } 00:06:26.946 ], 00:06:26.946 "driver_specific": {} 00:06:26.946 } 00:06:26.946 ]' 00:06:26.946 15:39:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:26.946 15:39:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:26.946 15:39:21 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:26.946 15:39:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.946 15:39:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.946 [2024-12-09 15:39:21.984136] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:26.946 [2024-12-09 15:39:21.984169] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:26.946 [2024-12-09 15:39:21.984184] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4a7b190 00:06:26.946 [2024-12-09 15:39:21.984194] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:26.946 [2024-12-09 15:39:21.985068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:26.946 [2024-12-09 15:39:21.985091] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:26.946 Passthru0 00:06:26.946 15:39:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.946 15:39:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:26.946 15:39:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.946 15:39:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.946 15:39:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.946 15:39:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:26.946 { 00:06:26.946 "name": "Malloc0", 00:06:26.946 "aliases": [ 00:06:26.946 "5ce55941-2a5a-4113-b5d3-ba46458953e4" 00:06:26.946 ], 00:06:26.946 "product_name": "Malloc disk", 00:06:26.946 "block_size": 512, 00:06:26.946 "num_blocks": 16384, 00:06:26.946 "uuid": "5ce55941-2a5a-4113-b5d3-ba46458953e4", 00:06:26.946 "assigned_rate_limits": { 00:06:26.946 "rw_ios_per_sec": 0, 00:06:26.946 "rw_mbytes_per_sec": 0, 00:06:26.946 "r_mbytes_per_sec": 0, 00:06:26.946 "w_mbytes_per_sec": 0 00:06:26.946 }, 00:06:26.946 "claimed": true, 00:06:26.946 "claim_type": "exclusive_write", 00:06:26.946 "zoned": false, 00:06:26.946 "supported_io_types": { 00:06:26.946 "read": true, 00:06:26.946 "write": true, 00:06:26.946 "unmap": true, 
00:06:26.946 "flush": true, 00:06:26.946 "reset": true, 00:06:26.946 "nvme_admin": false, 00:06:26.946 "nvme_io": false, 00:06:26.946 "nvme_io_md": false, 00:06:26.946 "write_zeroes": true, 00:06:26.946 "zcopy": true, 00:06:26.946 "get_zone_info": false, 00:06:26.946 "zone_management": false, 00:06:26.946 "zone_append": false, 00:06:26.946 "compare": false, 00:06:26.946 "compare_and_write": false, 00:06:26.946 "abort": true, 00:06:26.946 "seek_hole": false, 00:06:26.946 "seek_data": false, 00:06:26.946 "copy": true, 00:06:26.946 "nvme_iov_md": false 00:06:26.946 }, 00:06:26.946 "memory_domains": [ 00:06:26.946 { 00:06:26.946 "dma_device_id": "system", 00:06:26.946 "dma_device_type": 1 00:06:26.946 }, 00:06:26.946 { 00:06:26.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:26.946 "dma_device_type": 2 00:06:26.946 } 00:06:26.946 ], 00:06:26.946 "driver_specific": {} 00:06:26.946 }, 00:06:26.946 { 00:06:26.946 "name": "Passthru0", 00:06:26.946 "aliases": [ 00:06:26.946 "4f5ccfe8-cc46-5440-9c98-65a84a422f02" 00:06:26.946 ], 00:06:26.946 "product_name": "passthru", 00:06:26.946 "block_size": 512, 00:06:26.946 "num_blocks": 16384, 00:06:26.946 "uuid": "4f5ccfe8-cc46-5440-9c98-65a84a422f02", 00:06:26.946 "assigned_rate_limits": { 00:06:26.946 "rw_ios_per_sec": 0, 00:06:26.946 "rw_mbytes_per_sec": 0, 00:06:26.946 "r_mbytes_per_sec": 0, 00:06:26.946 "w_mbytes_per_sec": 0 00:06:26.946 }, 00:06:26.946 "claimed": false, 00:06:26.946 "zoned": false, 00:06:26.946 "supported_io_types": { 00:06:26.946 "read": true, 00:06:26.946 "write": true, 00:06:26.946 "unmap": true, 00:06:26.946 "flush": true, 00:06:26.946 "reset": true, 00:06:26.946 "nvme_admin": false, 00:06:26.946 "nvme_io": false, 00:06:26.946 "nvme_io_md": false, 00:06:26.946 "write_zeroes": true, 00:06:26.946 "zcopy": true, 00:06:26.946 "get_zone_info": false, 00:06:26.946 "zone_management": false, 00:06:26.946 "zone_append": false, 00:06:26.946 "compare": false, 00:06:26.946 "compare_and_write": false, 00:06:26.946 "abort": true, 00:06:26.946 "seek_hole": false, 00:06:26.946 "seek_data": false, 00:06:26.946 "copy": true, 00:06:26.946 "nvme_iov_md": false 00:06:26.946 }, 00:06:26.946 "memory_domains": [ 00:06:26.946 { 00:06:26.946 "dma_device_id": "system", 00:06:26.946 "dma_device_type": 1 00:06:26.946 }, 00:06:26.946 { 00:06:26.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:26.946 "dma_device_type": 2 00:06:26.946 } 00:06:26.946 ], 00:06:26.946 "driver_specific": { 00:06:26.946 "passthru": { 00:06:26.946 "name": "Passthru0", 00:06:26.946 "base_bdev_name": "Malloc0" 00:06:26.946 } 00:06:26.946 } 00:06:26.946 } 00:06:26.946 ]' 00:06:26.946 15:39:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:26.946 15:39:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:26.946 15:39:22 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:26.946 15:39:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.946 15:39:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.946 15:39:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.946 15:39:22 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:26.946 15:39:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.946 15:39:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.946 15:39:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.946 15:39:22 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:26.946 15:39:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.946 15:39:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.946 15:39:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.946 15:39:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:26.946 15:39:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:26.946 15:39:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:26.946 00:06:26.946 real 0m0.256s 00:06:26.946 user 0m0.152s 00:06:26.946 sys 0m0.039s 00:06:26.946 15:39:22 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.946 15:39:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.946 ************************************ 00:06:26.946 END TEST rpc_integrity 00:06:26.946 ************************************ 00:06:26.946 15:39:22 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:26.946 15:39:22 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.946 15:39:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.946 15:39:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.205 ************************************ 00:06:27.205 START TEST rpc_plugins 00:06:27.205 ************************************ 00:06:27.205 15:39:22 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:27.205 15:39:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:27.205 15:39:22 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.205 15:39:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:27.205 15:39:22 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.205 15:39:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:27.205 15:39:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:27.205 15:39:22 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.205 15:39:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:27.205 15:39:22 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.205 15:39:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:27.205 { 00:06:27.205 "name": "Malloc1", 00:06:27.205 "aliases": [ 00:06:27.205 "16553b44-9e50-4d53-a47c-b179d24e0fe9" 00:06:27.205 ], 00:06:27.205 "product_name": "Malloc disk", 00:06:27.205 "block_size": 4096, 00:06:27.205 "num_blocks": 256, 00:06:27.205 "uuid": "16553b44-9e50-4d53-a47c-b179d24e0fe9", 00:06:27.205 "assigned_rate_limits": { 00:06:27.205 "rw_ios_per_sec": 0, 00:06:27.205 "rw_mbytes_per_sec": 0, 00:06:27.205 "r_mbytes_per_sec": 0, 00:06:27.205 "w_mbytes_per_sec": 0 00:06:27.205 }, 00:06:27.205 "claimed": false, 00:06:27.205 "zoned": false, 00:06:27.205 "supported_io_types": { 00:06:27.205 "read": true, 00:06:27.205 "write": true, 00:06:27.205 "unmap": true, 00:06:27.205 "flush": true, 00:06:27.205 "reset": true, 00:06:27.205 "nvme_admin": false, 00:06:27.205 "nvme_io": false, 00:06:27.205 "nvme_io_md": false, 00:06:27.205 "write_zeroes": true, 00:06:27.205 "zcopy": true, 00:06:27.205 "get_zone_info": false, 00:06:27.205 "zone_management": false, 00:06:27.205 "zone_append": false, 00:06:27.205 "compare": false, 00:06:27.205 "compare_and_write": false, 00:06:27.205 "abort": true, 00:06:27.205 "seek_hole": false, 00:06:27.205 "seek_data": false, 00:06:27.205 "copy": true, 00:06:27.205 
"nvme_iov_md": false 00:06:27.205 }, 00:06:27.205 "memory_domains": [ 00:06:27.205 { 00:06:27.205 "dma_device_id": "system", 00:06:27.205 "dma_device_type": 1 00:06:27.205 }, 00:06:27.205 { 00:06:27.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.205 "dma_device_type": 2 00:06:27.205 } 00:06:27.205 ], 00:06:27.205 "driver_specific": {} 00:06:27.206 } 00:06:27.206 ]' 00:06:27.206 15:39:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:27.206 15:39:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:27.206 15:39:22 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:27.206 15:39:22 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.206 15:39:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:27.206 15:39:22 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.206 15:39:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:27.206 15:39:22 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.206 15:39:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:27.206 15:39:22 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.206 15:39:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:27.206 15:39:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:27.206 15:39:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:27.206 00:06:27.206 real 0m0.125s 00:06:27.206 user 0m0.077s 00:06:27.206 sys 0m0.016s 00:06:27.206 15:39:22 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.206 15:39:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:27.206 ************************************ 00:06:27.206 END TEST rpc_plugins 00:06:27.206 ************************************ 00:06:27.206 15:39:22 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:27.206 15:39:22 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.206 15:39:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.206 15:39:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.206 ************************************ 00:06:27.206 START TEST rpc_trace_cmd_test 00:06:27.206 ************************************ 00:06:27.206 15:39:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:27.206 15:39:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:27.206 15:39:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:27.206 15:39:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.206 15:39:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.206 15:39:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.206 15:39:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:27.206 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid894476", 00:06:27.206 "tpoint_group_mask": "0x8", 00:06:27.206 "iscsi_conn": { 00:06:27.206 "mask": "0x2", 00:06:27.206 "tpoint_mask": "0x0" 00:06:27.206 }, 00:06:27.206 "scsi": { 00:06:27.206 "mask": "0x4", 00:06:27.206 "tpoint_mask": "0x0" 00:06:27.206 }, 00:06:27.206 "bdev": { 00:06:27.206 "mask": "0x8", 00:06:27.206 "tpoint_mask": "0xffffffffffffffff" 00:06:27.206 }, 00:06:27.206 "nvmf_rdma": { 00:06:27.206 "mask": "0x10", 00:06:27.206 "tpoint_mask": "0x0" 00:06:27.206 }, 00:06:27.206 "nvmf_tcp": { 00:06:27.206 "mask": "0x20", 
00:06:27.206 "tpoint_mask": "0x0" 00:06:27.206 }, 00:06:27.206 "ftl": { 00:06:27.206 "mask": "0x40", 00:06:27.206 "tpoint_mask": "0x0" 00:06:27.206 }, 00:06:27.206 "blobfs": { 00:06:27.206 "mask": "0x80", 00:06:27.206 "tpoint_mask": "0x0" 00:06:27.206 }, 00:06:27.206 "dsa": { 00:06:27.206 "mask": "0x200", 00:06:27.206 "tpoint_mask": "0x0" 00:06:27.206 }, 00:06:27.206 "thread": { 00:06:27.206 "mask": "0x400", 00:06:27.206 "tpoint_mask": "0x0" 00:06:27.206 }, 00:06:27.206 "nvme_pcie": { 00:06:27.206 "mask": "0x800", 00:06:27.206 "tpoint_mask": "0x0" 00:06:27.206 }, 00:06:27.206 "iaa": { 00:06:27.206 "mask": "0x1000", 00:06:27.206 "tpoint_mask": "0x0" 00:06:27.206 }, 00:06:27.206 "nvme_tcp": { 00:06:27.206 "mask": "0x2000", 00:06:27.206 "tpoint_mask": "0x0" 00:06:27.206 }, 00:06:27.206 "bdev_nvme": { 00:06:27.206 "mask": "0x4000", 00:06:27.206 "tpoint_mask": "0x0" 00:06:27.206 }, 00:06:27.206 "sock": { 00:06:27.206 "mask": "0x8000", 00:06:27.206 "tpoint_mask": "0x0" 00:06:27.206 }, 00:06:27.206 "blob": { 00:06:27.206 "mask": "0x10000", 00:06:27.206 "tpoint_mask": "0x0" 00:06:27.206 }, 00:06:27.206 "bdev_raid": { 00:06:27.206 "mask": "0x20000", 00:06:27.206 "tpoint_mask": "0x0" 00:06:27.206 }, 00:06:27.206 "scheduler": { 00:06:27.206 "mask": "0x40000", 00:06:27.206 "tpoint_mask": "0x0" 00:06:27.206 } 00:06:27.206 }' 00:06:27.206 15:39:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:27.465 15:39:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:27.465 15:39:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:27.465 15:39:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:27.465 15:39:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:27.465 15:39:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:27.465 15:39:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:27.465 15:39:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:27.465 15:39:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:27.465 15:39:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:27.465 00:06:27.465 real 0m0.234s 00:06:27.465 user 0m0.188s 00:06:27.465 sys 0m0.036s 00:06:27.465 15:39:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.465 15:39:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.465 ************************************ 00:06:27.465 END TEST rpc_trace_cmd_test 00:06:27.465 ************************************ 00:06:27.465 15:39:22 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:27.465 15:39:22 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:27.465 15:39:22 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:27.465 15:39:22 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.465 15:39:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.465 15:39:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.465 ************************************ 00:06:27.465 START TEST rpc_daemon_integrity 00:06:27.465 ************************************ 00:06:27.465 15:39:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:27.465 15:39:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:27.465 15:39:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.465 15:39:22 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.465 15:39:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.465 15:39:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:27.465 15:39:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:27.723 15:39:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:27.723 15:39:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:27.723 15:39:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.723 15:39:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.723 15:39:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.723 15:39:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:27.723 15:39:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:27.723 15:39:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.723 15:39:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.723 15:39:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.723 15:39:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:27.723 { 00:06:27.723 "name": "Malloc2", 00:06:27.723 "aliases": [ 00:06:27.723 "e607dabd-f4bd-4462-a8dd-787bfad24a9e" 00:06:27.723 ], 00:06:27.723 "product_name": "Malloc disk", 00:06:27.723 "block_size": 512, 00:06:27.723 "num_blocks": 16384, 00:06:27.723 "uuid": "e607dabd-f4bd-4462-a8dd-787bfad24a9e", 00:06:27.723 "assigned_rate_limits": { 00:06:27.723 "rw_ios_per_sec": 0, 00:06:27.723 "rw_mbytes_per_sec": 0, 00:06:27.723 "r_mbytes_per_sec": 0, 00:06:27.723 "w_mbytes_per_sec": 0 00:06:27.723 }, 00:06:27.723 "claimed": false, 00:06:27.723 "zoned": false, 00:06:27.723 "supported_io_types": { 00:06:27.723 "read": true, 00:06:27.723 "write": true, 00:06:27.723 "unmap": true, 00:06:27.723 "flush": true, 00:06:27.723 "reset": true, 00:06:27.723 "nvme_admin": false, 00:06:27.723 "nvme_io": false, 00:06:27.723 "nvme_io_md": false, 00:06:27.723 "write_zeroes": true, 00:06:27.723 "zcopy": true, 00:06:27.723 "get_zone_info": false, 00:06:27.723 "zone_management": false, 00:06:27.723 "zone_append": false, 00:06:27.723 "compare": false, 00:06:27.723 "compare_and_write": false, 00:06:27.723 "abort": true, 00:06:27.723 "seek_hole": false, 00:06:27.723 "seek_data": false, 00:06:27.723 "copy": true, 00:06:27.723 "nvme_iov_md": false 00:06:27.723 }, 00:06:27.723 "memory_domains": [ 00:06:27.723 { 00:06:27.723 "dma_device_id": "system", 00:06:27.723 "dma_device_type": 1 00:06:27.723 }, 00:06:27.723 { 00:06:27.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.723 "dma_device_type": 2 00:06:27.723 } 00:06:27.723 ], 00:06:27.723 "driver_specific": {} 00:06:27.723 } 00:06:27.723 ]' 00:06:27.723 15:39:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:27.723 15:39:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:27.723 15:39:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:27.723 15:39:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.723 15:39:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.723 [2024-12-09 15:39:22.802323] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:27.723 
[2024-12-09 15:39:22.802355] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:27.723 [2024-12-09 15:39:22.802374] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4ba8c30 00:06:27.723 [2024-12-09 15:39:22.802383] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:27.723 [2024-12-09 15:39:22.803116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:27.723 [2024-12-09 15:39:22.803140] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:27.723 Passthru0 00:06:27.723 15:39:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.723 15:39:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:27.723 15:39:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.723 15:39:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.723 15:39:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.723 15:39:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:27.723 { 00:06:27.723 "name": "Malloc2", 00:06:27.723 "aliases": [ 00:06:27.723 "e607dabd-f4bd-4462-a8dd-787bfad24a9e" 00:06:27.723 ], 00:06:27.723 "product_name": "Malloc disk", 00:06:27.723 "block_size": 512, 00:06:27.723 "num_blocks": 16384, 00:06:27.723 "uuid": "e607dabd-f4bd-4462-a8dd-787bfad24a9e", 00:06:27.723 "assigned_rate_limits": { 00:06:27.723 "rw_ios_per_sec": 0, 00:06:27.723 "rw_mbytes_per_sec": 0, 00:06:27.723 "r_mbytes_per_sec": 0, 00:06:27.723 "w_mbytes_per_sec": 0 00:06:27.723 }, 00:06:27.723 "claimed": true, 00:06:27.723 "claim_type": "exclusive_write", 00:06:27.723 "zoned": false, 00:06:27.723 "supported_io_types": { 00:06:27.723 "read": true, 00:06:27.723 "write": true, 00:06:27.723 "unmap": true, 00:06:27.723 "flush": true, 00:06:27.723 "reset": true, 00:06:27.723 "nvme_admin": false, 00:06:27.723 "nvme_io": false, 00:06:27.723 "nvme_io_md": false, 00:06:27.723 "write_zeroes": true, 00:06:27.723 "zcopy": true, 00:06:27.723 "get_zone_info": false, 00:06:27.723 "zone_management": false, 00:06:27.723 "zone_append": false, 00:06:27.723 "compare": false, 00:06:27.723 "compare_and_write": false, 00:06:27.723 "abort": true, 00:06:27.723 "seek_hole": false, 00:06:27.723 "seek_data": false, 00:06:27.723 "copy": true, 00:06:27.723 "nvme_iov_md": false 00:06:27.723 }, 00:06:27.723 "memory_domains": [ 00:06:27.723 { 00:06:27.723 "dma_device_id": "system", 00:06:27.723 "dma_device_type": 1 00:06:27.723 }, 00:06:27.723 { 00:06:27.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.723 "dma_device_type": 2 00:06:27.723 } 00:06:27.723 ], 00:06:27.723 "driver_specific": {} 00:06:27.723 }, 00:06:27.723 { 00:06:27.723 "name": "Passthru0", 00:06:27.723 "aliases": [ 00:06:27.723 "96a12607-c3cc-5f3a-a744-d33842e61e84" 00:06:27.723 ], 00:06:27.723 "product_name": "passthru", 00:06:27.723 "block_size": 512, 00:06:27.723 "num_blocks": 16384, 00:06:27.723 "uuid": "96a12607-c3cc-5f3a-a744-d33842e61e84", 00:06:27.723 "assigned_rate_limits": { 00:06:27.723 "rw_ios_per_sec": 0, 00:06:27.723 "rw_mbytes_per_sec": 0, 00:06:27.723 "r_mbytes_per_sec": 0, 00:06:27.723 "w_mbytes_per_sec": 0 00:06:27.723 }, 00:06:27.723 "claimed": false, 00:06:27.723 "zoned": false, 00:06:27.723 "supported_io_types": { 00:06:27.724 "read": true, 00:06:27.724 "write": true, 00:06:27.724 "unmap": true, 00:06:27.724 "flush": true, 00:06:27.724 "reset": true, 
00:06:27.724 "nvme_admin": false, 00:06:27.724 "nvme_io": false, 00:06:27.724 "nvme_io_md": false, 00:06:27.724 "write_zeroes": true, 00:06:27.724 "zcopy": true, 00:06:27.724 "get_zone_info": false, 00:06:27.724 "zone_management": false, 00:06:27.724 "zone_append": false, 00:06:27.724 "compare": false, 00:06:27.724 "compare_and_write": false, 00:06:27.724 "abort": true, 00:06:27.724 "seek_hole": false, 00:06:27.724 "seek_data": false, 00:06:27.724 "copy": true, 00:06:27.724 "nvme_iov_md": false 00:06:27.724 }, 00:06:27.724 "memory_domains": [ 00:06:27.724 { 00:06:27.724 "dma_device_id": "system", 00:06:27.724 "dma_device_type": 1 00:06:27.724 }, 00:06:27.724 { 00:06:27.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.724 "dma_device_type": 2 00:06:27.724 } 00:06:27.724 ], 00:06:27.724 "driver_specific": { 00:06:27.724 "passthru": { 00:06:27.724 "name": "Passthru0", 00:06:27.724 "base_bdev_name": "Malloc2" 00:06:27.724 } 00:06:27.724 } 00:06:27.724 } 00:06:27.724 ]' 00:06:27.724 15:39:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:27.724 15:39:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:27.724 15:39:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:27.724 15:39:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.724 15:39:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.724 15:39:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.724 15:39:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:27.724 15:39:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.724 15:39:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.724 15:39:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.724 15:39:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:27.724 15:39:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.724 15:39:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.724 15:39:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.724 15:39:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:27.724 15:39:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:27.982 15:39:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:27.982 00:06:27.982 real 0m0.278s 00:06:27.982 user 0m0.174s 00:06:27.982 sys 0m0.040s 00:06:27.982 15:39:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.982 15:39:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.982 ************************************ 00:06:27.982 END TEST rpc_daemon_integrity 00:06:27.982 ************************************ 00:06:27.982 15:39:22 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:27.982 15:39:22 rpc -- rpc/rpc.sh@84 -- # killprocess 894476 00:06:27.982 15:39:22 rpc -- common/autotest_common.sh@954 -- # '[' -z 894476 ']' 00:06:27.982 15:39:22 rpc -- common/autotest_common.sh@958 -- # kill -0 894476 00:06:27.982 15:39:22 rpc -- common/autotest_common.sh@959 -- # uname 00:06:27.982 15:39:22 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.982 15:39:23 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 894476 
00:06:27.982 15:39:23 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.982 15:39:23 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.982 15:39:23 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 894476' 00:06:27.982 killing process with pid 894476 00:06:27.982 15:39:23 rpc -- common/autotest_common.sh@973 -- # kill 894476 00:06:27.982 15:39:23 rpc -- common/autotest_common.sh@978 -- # wait 894476 00:06:28.241 00:06:28.241 real 0m2.063s 00:06:28.241 user 0m2.585s 00:06:28.241 sys 0m0.758s 00:06:28.241 15:39:23 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.241 15:39:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.241 ************************************ 00:06:28.241 END TEST rpc 00:06:28.241 ************************************ 00:06:28.241 15:39:23 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:28.241 15:39:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.241 15:39:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.241 15:39:23 -- common/autotest_common.sh@10 -- # set +x 00:06:28.241 ************************************ 00:06:28.241 START TEST skip_rpc 00:06:28.241 ************************************ 00:06:28.241 15:39:23 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:28.500 * Looking for test storage... 00:06:28.500 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:28.500 15:39:23 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:28.500 15:39:23 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:28.500 15:39:23 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:28.500 15:39:23 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.500 15:39:23 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:28.500 15:39:23 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.500 15:39:23 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:28.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.500 --rc genhtml_branch_coverage=1 00:06:28.500 --rc genhtml_function_coverage=1 00:06:28.500 --rc genhtml_legend=1 00:06:28.500 --rc geninfo_all_blocks=1 00:06:28.500 --rc geninfo_unexecuted_blocks=1 00:06:28.500 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:28.500 ' 00:06:28.500 15:39:23 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:28.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.500 --rc genhtml_branch_coverage=1 00:06:28.500 --rc genhtml_function_coverage=1 00:06:28.500 --rc genhtml_legend=1 00:06:28.500 --rc geninfo_all_blocks=1 00:06:28.500 --rc geninfo_unexecuted_blocks=1 00:06:28.500 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:28.500 ' 00:06:28.500 15:39:23 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:28.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.500 --rc genhtml_branch_coverage=1 00:06:28.500 --rc genhtml_function_coverage=1 00:06:28.500 --rc genhtml_legend=1 00:06:28.500 --rc geninfo_all_blocks=1 00:06:28.500 --rc geninfo_unexecuted_blocks=1 00:06:28.500 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:28.500 ' 00:06:28.500 15:39:23 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:28.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.500 --rc genhtml_branch_coverage=1 00:06:28.500 --rc genhtml_function_coverage=1 00:06:28.500 --rc genhtml_legend=1 00:06:28.500 --rc geninfo_all_blocks=1 00:06:28.500 --rc geninfo_unexecuted_blocks=1 00:06:28.500 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:28.500 ' 00:06:28.500 15:39:23 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:28.500 15:39:23 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:28.500 15:39:23 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:28.500 15:39:23 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.500 15:39:23 
skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.500 15:39:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.500 ************************************ 00:06:28.501 START TEST skip_rpc 00:06:28.501 ************************************ 00:06:28.501 15:39:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:28.501 15:39:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=894849 00:06:28.501 15:39:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:28.501 15:39:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:28.501 15:39:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:28.501 [2024-12-09 15:39:23.674461] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:06:28.501 [2024-12-09 15:39:23.674541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid894849 ] 00:06:28.759 [2024-12-09 15:39:23.746616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.759 [2024-12-09 15:39:23.791118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.022 15:39:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:34.022 15:39:28 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:34.022 15:39:28 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:34.022 15:39:28 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:34.022 15:39:28 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.022 15:39:28 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:34.022 15:39:28 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.022 15:39:28 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:34.022 15:39:28 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.022 15:39:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.022 15:39:28 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:34.022 15:39:28 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:34.022 15:39:28 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:34.022 15:39:28 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:34.022 15:39:28 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:34.022 15:39:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:34.022 15:39:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 894849 00:06:34.022 15:39:28 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 894849 ']' 00:06:34.022 15:39:28 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 894849 00:06:34.022 15:39:28 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:34.022 15:39:28 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.022 15:39:28 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 894849 00:06:34.022 
15:39:28 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.022 15:39:28 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.022 15:39:28 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 894849' 00:06:34.022 killing process with pid 894849 00:06:34.022 15:39:28 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 894849 00:06:34.022 15:39:28 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 894849 00:06:34.022 00:06:34.022 real 0m5.365s 00:06:34.022 user 0m5.129s 00:06:34.022 sys 0m0.275s 00:06:34.022 15:39:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.022 15:39:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.022 ************************************ 00:06:34.022 END TEST skip_rpc 00:06:34.022 ************************************ 00:06:34.022 15:39:29 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:34.022 15:39:29 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.022 15:39:29 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.022 15:39:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.022 ************************************ 00:06:34.022 START TEST skip_rpc_with_json 00:06:34.022 ************************************ 00:06:34.022 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:34.022 15:39:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:34.022 15:39:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=895654 00:06:34.022 15:39:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.022 15:39:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.022 15:39:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 895654 00:06:34.022 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 895654 ']' 00:06:34.022 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.022 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.022 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.022 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.022 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:34.022 [2024-12-09 15:39:29.125136] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:06:34.022 [2024-12-09 15:39:29.125204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid895654 ] 00:06:34.022 [2024-12-09 15:39:29.193003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.022 [2024-12-09 15:39:29.237089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.281 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.281 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:34.281 15:39:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:34.281 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.281 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:34.281 [2024-12-09 15:39:29.459204] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:34.281 request: 00:06:34.281 { 00:06:34.281 "trtype": "tcp", 00:06:34.281 "method": "nvmf_get_transports", 00:06:34.281 "req_id": 1 00:06:34.281 } 00:06:34.281 Got JSON-RPC error response 00:06:34.281 response: 00:06:34.281 { 00:06:34.281 "code": -19, 00:06:34.281 "message": "No such device" 00:06:34.281 } 00:06:34.281 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:34.281 15:39:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:34.281 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.281 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:34.281 [2024-12-09 15:39:29.467294] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.281 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.281 15:39:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:34.281 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.281 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:34.540 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.540 15:39:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:34.540 { 00:06:34.540 "subsystems": [ 00:06:34.540 { 00:06:34.540 "subsystem": "scheduler", 00:06:34.540 "config": [ 00:06:34.540 { 00:06:34.540 "method": "framework_set_scheduler", 00:06:34.540 "params": { 00:06:34.540 "name": "static" 00:06:34.540 } 00:06:34.540 } 00:06:34.540 ] 00:06:34.540 }, 00:06:34.540 { 00:06:34.540 "subsystem": "vmd", 00:06:34.540 "config": [] 00:06:34.540 }, 00:06:34.540 { 00:06:34.540 "subsystem": "sock", 00:06:34.540 "config": [ 00:06:34.540 { 00:06:34.540 "method": "sock_set_default_impl", 00:06:34.540 "params": { 00:06:34.540 "impl_name": "posix" 00:06:34.540 } 00:06:34.540 }, 00:06:34.540 { 00:06:34.540 "method": "sock_impl_set_options", 00:06:34.540 "params": { 00:06:34.540 "impl_name": "ssl", 00:06:34.540 "recv_buf_size": 4096, 00:06:34.540 "send_buf_size": 4096, 00:06:34.540 "enable_recv_pipe": true, 00:06:34.540 "enable_quickack": false, 00:06:34.540 "enable_placement_id": 
0, 00:06:34.540 "enable_zerocopy_send_server": true, 00:06:34.540 "enable_zerocopy_send_client": false, 00:06:34.540 "zerocopy_threshold": 0, 00:06:34.540 "tls_version": 0, 00:06:34.540 "enable_ktls": false 00:06:34.540 } 00:06:34.540 }, 00:06:34.540 { 00:06:34.540 "method": "sock_impl_set_options", 00:06:34.540 "params": { 00:06:34.540 "impl_name": "posix", 00:06:34.540 "recv_buf_size": 2097152, 00:06:34.540 "send_buf_size": 2097152, 00:06:34.540 "enable_recv_pipe": true, 00:06:34.540 "enable_quickack": false, 00:06:34.540 "enable_placement_id": 0, 00:06:34.540 "enable_zerocopy_send_server": true, 00:06:34.540 "enable_zerocopy_send_client": false, 00:06:34.540 "zerocopy_threshold": 0, 00:06:34.540 "tls_version": 0, 00:06:34.540 "enable_ktls": false 00:06:34.540 } 00:06:34.540 } 00:06:34.540 ] 00:06:34.540 }, 00:06:34.540 { 00:06:34.540 "subsystem": "iobuf", 00:06:34.540 "config": [ 00:06:34.540 { 00:06:34.540 "method": "iobuf_set_options", 00:06:34.540 "params": { 00:06:34.540 "small_pool_count": 8192, 00:06:34.540 "large_pool_count": 1024, 00:06:34.540 "small_bufsize": 8192, 00:06:34.540 "large_bufsize": 135168, 00:06:34.540 "enable_numa": false 00:06:34.540 } 00:06:34.540 } 00:06:34.540 ] 00:06:34.540 }, 00:06:34.540 { 00:06:34.540 "subsystem": "keyring", 00:06:34.540 "config": [] 00:06:34.540 }, 00:06:34.540 { 00:06:34.540 "subsystem": "vfio_user_target", 00:06:34.540 "config": null 00:06:34.540 }, 00:06:34.540 { 00:06:34.540 "subsystem": "fsdev", 00:06:34.540 "config": [ 00:06:34.540 { 00:06:34.541 "method": "fsdev_set_opts", 00:06:34.541 "params": { 00:06:34.541 "fsdev_io_pool_size": 65535, 00:06:34.541 "fsdev_io_cache_size": 256 00:06:34.541 } 00:06:34.541 } 00:06:34.541 ] 00:06:34.541 }, 00:06:34.541 { 00:06:34.541 "subsystem": "accel", 00:06:34.541 "config": [ 00:06:34.541 { 00:06:34.541 "method": "accel_set_options", 00:06:34.541 "params": { 00:06:34.541 "small_cache_size": 128, 00:06:34.541 "large_cache_size": 16, 00:06:34.541 "task_count": 2048, 00:06:34.541 "sequence_count": 2048, 00:06:34.541 "buf_count": 2048 00:06:34.541 } 00:06:34.541 } 00:06:34.541 ] 00:06:34.541 }, 00:06:34.541 { 00:06:34.541 "subsystem": "bdev", 00:06:34.541 "config": [ 00:06:34.541 { 00:06:34.541 "method": "bdev_set_options", 00:06:34.541 "params": { 00:06:34.541 "bdev_io_pool_size": 65535, 00:06:34.541 "bdev_io_cache_size": 256, 00:06:34.541 "bdev_auto_examine": true, 00:06:34.541 "iobuf_small_cache_size": 128, 00:06:34.541 "iobuf_large_cache_size": 16 00:06:34.541 } 00:06:34.541 }, 00:06:34.541 { 00:06:34.541 "method": "bdev_raid_set_options", 00:06:34.541 "params": { 00:06:34.541 "process_window_size_kb": 1024, 00:06:34.541 "process_max_bandwidth_mb_sec": 0 00:06:34.541 } 00:06:34.541 }, 00:06:34.541 { 00:06:34.541 "method": "bdev_nvme_set_options", 00:06:34.541 "params": { 00:06:34.541 "action_on_timeout": "none", 00:06:34.541 "timeout_us": 0, 00:06:34.541 "timeout_admin_us": 0, 00:06:34.541 "keep_alive_timeout_ms": 10000, 00:06:34.541 "arbitration_burst": 0, 00:06:34.541 "low_priority_weight": 0, 00:06:34.541 "medium_priority_weight": 0, 00:06:34.541 "high_priority_weight": 0, 00:06:34.541 "nvme_adminq_poll_period_us": 10000, 00:06:34.541 "nvme_ioq_poll_period_us": 0, 00:06:34.541 "io_queue_requests": 0, 00:06:34.541 "delay_cmd_submit": true, 00:06:34.541 "transport_retry_count": 4, 00:06:34.541 "bdev_retry_count": 3, 00:06:34.541 "transport_ack_timeout": 0, 00:06:34.541 "ctrlr_loss_timeout_sec": 0, 00:06:34.541 "reconnect_delay_sec": 0, 00:06:34.541 "fast_io_fail_timeout_sec": 0, 00:06:34.541 
"disable_auto_failback": false, 00:06:34.541 "generate_uuids": false, 00:06:34.541 "transport_tos": 0, 00:06:34.541 "nvme_error_stat": false, 00:06:34.541 "rdma_srq_size": 0, 00:06:34.541 "io_path_stat": false, 00:06:34.541 "allow_accel_sequence": false, 00:06:34.541 "rdma_max_cq_size": 0, 00:06:34.541 "rdma_cm_event_timeout_ms": 0, 00:06:34.541 "dhchap_digests": [ 00:06:34.541 "sha256", 00:06:34.541 "sha384", 00:06:34.541 "sha512" 00:06:34.541 ], 00:06:34.541 "dhchap_dhgroups": [ 00:06:34.541 "null", 00:06:34.541 "ffdhe2048", 00:06:34.541 "ffdhe3072", 00:06:34.541 "ffdhe4096", 00:06:34.541 "ffdhe6144", 00:06:34.541 "ffdhe8192" 00:06:34.541 ] 00:06:34.541 } 00:06:34.541 }, 00:06:34.541 { 00:06:34.541 "method": "bdev_nvme_set_hotplug", 00:06:34.541 "params": { 00:06:34.541 "period_us": 100000, 00:06:34.541 "enable": false 00:06:34.541 } 00:06:34.541 }, 00:06:34.541 { 00:06:34.541 "method": "bdev_iscsi_set_options", 00:06:34.541 "params": { 00:06:34.541 "timeout_sec": 30 00:06:34.541 } 00:06:34.541 }, 00:06:34.541 { 00:06:34.541 "method": "bdev_wait_for_examine" 00:06:34.541 } 00:06:34.541 ] 00:06:34.541 }, 00:06:34.541 { 00:06:34.541 "subsystem": "nvmf", 00:06:34.541 "config": [ 00:06:34.541 { 00:06:34.541 "method": "nvmf_set_config", 00:06:34.541 "params": { 00:06:34.541 "discovery_filter": "match_any", 00:06:34.541 "admin_cmd_passthru": { 00:06:34.541 "identify_ctrlr": false 00:06:34.541 }, 00:06:34.541 "dhchap_digests": [ 00:06:34.541 "sha256", 00:06:34.541 "sha384", 00:06:34.541 "sha512" 00:06:34.541 ], 00:06:34.541 "dhchap_dhgroups": [ 00:06:34.541 "null", 00:06:34.541 "ffdhe2048", 00:06:34.541 "ffdhe3072", 00:06:34.541 "ffdhe4096", 00:06:34.541 "ffdhe6144", 00:06:34.541 "ffdhe8192" 00:06:34.541 ] 00:06:34.541 } 00:06:34.541 }, 00:06:34.541 { 00:06:34.541 "method": "nvmf_set_max_subsystems", 00:06:34.541 "params": { 00:06:34.541 "max_subsystems": 1024 00:06:34.541 } 00:06:34.541 }, 00:06:34.541 { 00:06:34.541 "method": "nvmf_set_crdt", 00:06:34.541 "params": { 00:06:34.541 "crdt1": 0, 00:06:34.541 "crdt2": 0, 00:06:34.541 "crdt3": 0 00:06:34.541 } 00:06:34.541 }, 00:06:34.541 { 00:06:34.541 "method": "nvmf_create_transport", 00:06:34.541 "params": { 00:06:34.541 "trtype": "TCP", 00:06:34.541 "max_queue_depth": 128, 00:06:34.541 "max_io_qpairs_per_ctrlr": 127, 00:06:34.541 "in_capsule_data_size": 4096, 00:06:34.541 "max_io_size": 131072, 00:06:34.541 "io_unit_size": 131072, 00:06:34.541 "max_aq_depth": 128, 00:06:34.541 "num_shared_buffers": 511, 00:06:34.541 "buf_cache_size": 4294967295, 00:06:34.541 "dif_insert_or_strip": false, 00:06:34.541 "zcopy": false, 00:06:34.541 "c2h_success": true, 00:06:34.541 "sock_priority": 0, 00:06:34.541 "abort_timeout_sec": 1, 00:06:34.541 "ack_timeout": 0, 00:06:34.541 "data_wr_pool_size": 0 00:06:34.541 } 00:06:34.541 } 00:06:34.541 ] 00:06:34.541 }, 00:06:34.541 { 00:06:34.541 "subsystem": "nbd", 00:06:34.541 "config": [] 00:06:34.541 }, 00:06:34.541 { 00:06:34.541 "subsystem": "ublk", 00:06:34.541 "config": [] 00:06:34.541 }, 00:06:34.541 { 00:06:34.541 "subsystem": "vhost_blk", 00:06:34.541 "config": [] 00:06:34.541 }, 00:06:34.541 { 00:06:34.541 "subsystem": "scsi", 00:06:34.541 "config": null 00:06:34.541 }, 00:06:34.541 { 00:06:34.541 "subsystem": "iscsi", 00:06:34.541 "config": [ 00:06:34.541 { 00:06:34.541 "method": "iscsi_set_options", 00:06:34.541 "params": { 00:06:34.541 "node_base": "iqn.2016-06.io.spdk", 00:06:34.541 "max_sessions": 128, 00:06:34.541 "max_connections_per_session": 2, 00:06:34.541 "max_queue_depth": 64, 00:06:34.541 
"default_time2wait": 2, 00:06:34.541 "default_time2retain": 20, 00:06:34.541 "first_burst_length": 8192, 00:06:34.541 "immediate_data": true, 00:06:34.541 "allow_duplicated_isid": false, 00:06:34.541 "error_recovery_level": 0, 00:06:34.541 "nop_timeout": 60, 00:06:34.541 "nop_in_interval": 30, 00:06:34.541 "disable_chap": false, 00:06:34.541 "require_chap": false, 00:06:34.541 "mutual_chap": false, 00:06:34.541 "chap_group": 0, 00:06:34.541 "max_large_datain_per_connection": 64, 00:06:34.541 "max_r2t_per_connection": 4, 00:06:34.541 "pdu_pool_size": 36864, 00:06:34.541 "immediate_data_pool_size": 16384, 00:06:34.541 "data_out_pool_size": 2048 00:06:34.541 } 00:06:34.541 } 00:06:34.541 ] 00:06:34.541 }, 00:06:34.541 { 00:06:34.541 "subsystem": "vhost_scsi", 00:06:34.541 "config": [] 00:06:34.541 } 00:06:34.541 ] 00:06:34.541 } 00:06:34.541 15:39:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:34.541 15:39:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 895654 00:06:34.541 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 895654 ']' 00:06:34.541 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 895654 00:06:34.541 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:34.541 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.541 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 895654 00:06:34.541 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.541 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.541 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 895654' 00:06:34.541 killing process with pid 895654 00:06:34.541 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 895654 00:06:34.541 15:39:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 895654 00:06:34.800 15:39:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=895757 00:06:34.800 15:39:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:34.800 15:39:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:40.068 15:39:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 895757 00:06:40.068 15:39:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 895757 ']' 00:06:40.068 15:39:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 895757 00:06:40.068 15:39:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:40.068 15:39:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.068 15:39:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 895757 00:06:40.068 15:39:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.068 15:39:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.068 15:39:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 895757' 00:06:40.068 killing process with pid 895757 00:06:40.068 15:39:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 895757 00:06:40.068 15:39:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 895757 00:06:40.328 15:39:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:40.328 15:39:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:40.328 00:06:40.328 real 0m6.247s 00:06:40.328 user 0m5.957s 00:06:40.328 sys 0m0.598s 00:06:40.328 15:39:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.328 15:39:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:40.328 ************************************ 00:06:40.328 END TEST skip_rpc_with_json 00:06:40.328 ************************************ 00:06:40.328 15:39:35 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:40.328 15:39:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.328 15:39:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.328 15:39:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.328 ************************************ 00:06:40.328 START TEST skip_rpc_with_delay 00:06:40.328 ************************************ 00:06:40.328 15:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:40.328 15:39:35 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:40.328 15:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:40.328 15:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:40.328 15:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.328 15:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.328 15:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.328 15:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.328 15:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.328 15:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.328 15:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.328 15:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:40.328 15:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 
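The xtrace above is the suite's NOT() wrapper launching spdk_tgt with --no-rpc-server and --wait-for-rpc together; the test only passes if the target exits non-zero. A minimal standalone sketch of that negative check, using the binary path shown in the log but not the suite's helper functions (simplified, for orientation only):

    SPDK_TGT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt

    # spdk_tgt must refuse --wait-for-rpc when the RPC server is disabled.
    if "$SPDK_TGT" --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "FAIL: spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
        exit 1
    fi
    echo "OK: spdk_tgt rejected --wait-for-rpc as expected"
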
00:06:40.328 [2024-12-09 15:39:35.430283] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:40.328 15:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:40.328 15:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:40.328 15:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:40.328 15:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:40.328 00:06:40.328 real 0m0.027s 00:06:40.328 user 0m0.012s 00:06:40.328 sys 0m0.015s 00:06:40.328 15:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.328 15:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:40.328 ************************************ 00:06:40.328 END TEST skip_rpc_with_delay 00:06:40.328 ************************************ 00:06:40.328 15:39:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:40.328 15:39:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:40.328 15:39:35 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:40.328 15:39:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.328 15:39:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.328 15:39:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.328 ************************************ 00:06:40.328 START TEST exit_on_failed_rpc_init 00:06:40.328 ************************************ 00:06:40.328 15:39:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:40.328 15:39:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=896520 00:06:40.328 15:39:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 896520 00:06:40.328 15:39:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 896520 ']' 00:06:40.328 15:39:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.328 15:39:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.328 15:39:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.328 15:39:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.328 15:39:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.328 15:39:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:40.328 [2024-12-09 15:39:35.532296] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:06:40.328 [2024-12-09 15:39:35.532339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid896520 ] 00:06:40.587 [2024-12-09 15:39:35.601157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.587 [2024-12-09 15:39:35.649793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.849 15:39:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.849 15:39:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:40.849 15:39:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:40.849 15:39:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:40.849 15:39:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:40.849 15:39:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:40.849 15:39:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.849 15:39:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.849 15:39:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.849 15:39:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.849 15:39:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.849 15:39:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.849 15:39:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.849 15:39:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:40.849 15:39:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:40.849 [2024-12-09 15:39:35.879297] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:06:40.849 [2024-12-09 15:39:35.879361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid896646 ] 00:06:40.849 [2024-12-09 15:39:35.948815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.849 [2024-12-09 15:39:35.993232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.849 [2024-12-09 15:39:35.993303] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:40.849 [2024-12-09 15:39:35.993316] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:40.849 [2024-12-09 15:39:35.993324] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:40.849 15:39:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:40.849 15:39:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:40.849 15:39:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:40.849 15:39:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:40.849 15:39:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:40.849 15:39:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:40.849 15:39:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:40.849 15:39:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 896520 00:06:40.849 15:39:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 896520 ']' 00:06:40.849 15:39:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 896520 00:06:40.849 15:39:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:40.849 15:39:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.849 15:39:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 896520 00:06:41.108 15:39:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.108 15:39:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.108 15:39:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 896520' 00:06:41.108 killing process with pid 896520 00:06:41.108 15:39:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 896520 00:06:41.108 15:39:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 896520 00:06:41.368 00:06:41.368 real 0m0.866s 00:06:41.368 user 0m0.885s 00:06:41.368 sys 0m0.375s 00:06:41.368 15:39:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.368 15:39:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:41.368 ************************************ 00:06:41.368 END TEST exit_on_failed_rpc_init 00:06:41.368 ************************************ 00:06:41.368 15:39:36 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:41.368 00:06:41.368 real 0m13.002s 00:06:41.368 user 0m12.201s 00:06:41.368 sys 0m1.581s 00:06:41.368 15:39:36 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.368 15:39:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.368 ************************************ 00:06:41.368 END TEST skip_rpc 00:06:41.368 ************************************ 00:06:41.368 15:39:36 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:41.368 15:39:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.368 15:39:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.368 15:39:36 -- 
common/autotest_common.sh@10 -- # set +x 00:06:41.368 ************************************ 00:06:41.368 START TEST rpc_client 00:06:41.368 ************************************ 00:06:41.368 15:39:36 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:41.627 * Looking for test storage... 00:06:41.627 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:06:41.627 15:39:36 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:41.627 15:39:36 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:06:41.627 15:39:36 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:41.627 15:39:36 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:41.627 15:39:36 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.627 15:39:36 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.627 15:39:36 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.627 15:39:36 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.627 15:39:36 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.627 15:39:36 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.627 15:39:36 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.627 15:39:36 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.628 15:39:36 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.628 15:39:36 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.628 15:39:36 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.628 15:39:36 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:41.628 15:39:36 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:41.628 15:39:36 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.628 15:39:36 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.628 15:39:36 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:41.628 15:39:36 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:41.628 15:39:36 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.628 15:39:36 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:41.628 15:39:36 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.628 15:39:36 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:41.628 15:39:36 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:41.628 15:39:36 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.628 15:39:36 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:41.628 15:39:36 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.628 15:39:36 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.628 15:39:36 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.628 15:39:36 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:41.628 15:39:36 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.628 15:39:36 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:41.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.628 --rc genhtml_branch_coverage=1 00:06:41.628 --rc genhtml_function_coverage=1 00:06:41.628 --rc genhtml_legend=1 00:06:41.628 --rc geninfo_all_blocks=1 00:06:41.628 --rc geninfo_unexecuted_blocks=1 00:06:41.628 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.628 ' 00:06:41.628 15:39:36 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:41.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.628 --rc genhtml_branch_coverage=1 00:06:41.628 --rc genhtml_function_coverage=1 00:06:41.628 --rc genhtml_legend=1 00:06:41.628 --rc geninfo_all_blocks=1 00:06:41.628 --rc geninfo_unexecuted_blocks=1 00:06:41.628 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.628 ' 00:06:41.628 15:39:36 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:41.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.628 --rc genhtml_branch_coverage=1 00:06:41.628 --rc genhtml_function_coverage=1 00:06:41.628 --rc genhtml_legend=1 00:06:41.628 --rc geninfo_all_blocks=1 00:06:41.628 --rc geninfo_unexecuted_blocks=1 00:06:41.628 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.628 ' 00:06:41.628 15:39:36 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:41.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.628 --rc genhtml_branch_coverage=1 00:06:41.628 --rc genhtml_function_coverage=1 00:06:41.628 --rc genhtml_legend=1 00:06:41.628 --rc geninfo_all_blocks=1 00:06:41.628 --rc geninfo_unexecuted_blocks=1 00:06:41.628 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.628 ' 00:06:41.628 15:39:36 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:41.628 OK 00:06:41.628 15:39:36 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:41.628 00:06:41.628 real 0m0.213s 00:06:41.628 user 0m0.117s 00:06:41.628 sys 0m0.112s 00:06:41.628 15:39:36 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:06:41.628 15:39:36 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:41.628 ************************************ 00:06:41.628 END TEST rpc_client 00:06:41.628 ************************************ 00:06:41.628 15:39:36 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:06:41.628 15:39:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.628 15:39:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.628 15:39:36 -- common/autotest_common.sh@10 -- # set +x 00:06:41.628 ************************************ 00:06:41.628 START TEST json_config 00:06:41.628 ************************************ 00:06:41.628 15:39:36 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:06:41.888 15:39:36 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:41.888 15:39:36 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:06:41.888 15:39:36 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:41.888 15:39:36 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:41.888 15:39:36 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.888 15:39:36 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.888 15:39:36 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.888 15:39:36 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.888 15:39:36 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.888 15:39:36 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.888 15:39:36 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.888 15:39:36 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.888 15:39:36 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.888 15:39:36 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.888 15:39:36 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.888 15:39:36 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:41.888 15:39:36 json_config -- scripts/common.sh@345 -- # : 1 00:06:41.888 15:39:36 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.888 15:39:36 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.888 15:39:36 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:41.888 15:39:36 json_config -- scripts/common.sh@353 -- # local d=1 00:06:41.888 15:39:36 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.888 15:39:36 json_config -- scripts/common.sh@355 -- # echo 1 00:06:41.888 15:39:36 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.888 15:39:36 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:41.888 15:39:36 json_config -- scripts/common.sh@353 -- # local d=2 00:06:41.888 15:39:36 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.888 15:39:36 json_config -- scripts/common.sh@355 -- # echo 2 00:06:41.888 15:39:36 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.888 15:39:36 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.888 15:39:36 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.888 15:39:36 json_config -- scripts/common.sh@368 -- # return 0 00:06:41.888 15:39:36 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.888 15:39:36 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:41.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.888 --rc genhtml_branch_coverage=1 00:06:41.888 --rc genhtml_function_coverage=1 00:06:41.888 --rc genhtml_legend=1 00:06:41.888 --rc geninfo_all_blocks=1 00:06:41.888 --rc geninfo_unexecuted_blocks=1 00:06:41.888 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.888 ' 00:06:41.888 15:39:36 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:41.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.888 --rc genhtml_branch_coverage=1 00:06:41.888 --rc genhtml_function_coverage=1 00:06:41.888 --rc genhtml_legend=1 00:06:41.888 --rc geninfo_all_blocks=1 00:06:41.888 --rc geninfo_unexecuted_blocks=1 00:06:41.888 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.889 ' 00:06:41.889 15:39:36 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:41.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.889 --rc genhtml_branch_coverage=1 00:06:41.889 --rc genhtml_function_coverage=1 00:06:41.889 --rc genhtml_legend=1 00:06:41.889 --rc geninfo_all_blocks=1 00:06:41.889 --rc geninfo_unexecuted_blocks=1 00:06:41.889 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.889 ' 00:06:41.889 15:39:36 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:41.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.889 --rc genhtml_branch_coverage=1 00:06:41.889 --rc genhtml_function_coverage=1 00:06:41.889 --rc genhtml_legend=1 00:06:41.889 --rc geninfo_all_blocks=1 00:06:41.889 --rc geninfo_unexecuted_blocks=1 00:06:41.889 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.889 ' 00:06:41.889 15:39:36 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:41.889 15:39:36 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:41.889 15:39:36 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.889 15:39:36 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.889 15:39:36 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.889 15:39:36 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.889 15:39:36 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.889 15:39:36 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.889 15:39:36 json_config -- paths/export.sh@5 -- # export PATH 00:06:41.889 15:39:36 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@51 -- # : 0 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:41.889 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:41.889 15:39:36 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:41.889 15:39:36 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:06:41.889 15:39:36 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:41.889 15:39:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:41.889 15:39:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:41.889 15:39:36 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:41.889 15:39:36 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:41.889 WARNING: No tests are enabled so not running JSON configuration tests 00:06:41.889 15:39:36 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:41.889 00:06:41.889 real 0m0.172s 00:06:41.889 user 0m0.096s 00:06:41.889 sys 0m0.084s 00:06:41.889 15:39:36 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.889 15:39:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.889 ************************************ 00:06:41.889 END TEST json_config 00:06:41.889 ************************************ 00:06:41.889 15:39:37 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:41.889 15:39:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.889 15:39:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.889 15:39:37 -- common/autotest_common.sh@10 -- # set +x 00:06:41.889 ************************************ 00:06:41.889 START TEST json_config_extra_key 00:06:41.889 ************************************ 00:06:41.889 15:39:37 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:42.149 15:39:37 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:42.149 15:39:37 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov 
--version 00:06:42.149 15:39:37 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:42.149 15:39:37 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.149 15:39:37 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:42.149 15:39:37 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.149 15:39:37 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:42.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.149 --rc genhtml_branch_coverage=1 00:06:42.150 --rc genhtml_function_coverage=1 00:06:42.150 --rc genhtml_legend=1 00:06:42.150 --rc geninfo_all_blocks=1 00:06:42.150 --rc geninfo_unexecuted_blocks=1 00:06:42.150 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:42.150 ' 00:06:42.150 15:39:37 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:42.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.150 --rc genhtml_branch_coverage=1 
00:06:42.150 --rc genhtml_function_coverage=1 00:06:42.150 --rc genhtml_legend=1 00:06:42.150 --rc geninfo_all_blocks=1 00:06:42.150 --rc geninfo_unexecuted_blocks=1 00:06:42.150 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:42.150 ' 00:06:42.150 15:39:37 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:42.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.150 --rc genhtml_branch_coverage=1 00:06:42.150 --rc genhtml_function_coverage=1 00:06:42.150 --rc genhtml_legend=1 00:06:42.150 --rc geninfo_all_blocks=1 00:06:42.150 --rc geninfo_unexecuted_blocks=1 00:06:42.150 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:42.150 ' 00:06:42.150 15:39:37 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:42.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.150 --rc genhtml_branch_coverage=1 00:06:42.150 --rc genhtml_function_coverage=1 00:06:42.150 --rc genhtml_legend=1 00:06:42.150 --rc geninfo_all_blocks=1 00:06:42.150 --rc geninfo_unexecuted_blocks=1 00:06:42.150 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:42.150 ' 00:06:42.150 15:39:37 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:42.150 15:39:37 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:42.150 15:39:37 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.150 15:39:37 json_config_extra_key -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.150 15:39:37 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.150 15:39:37 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.150 15:39:37 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.150 15:39:37 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.150 15:39:37 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:42.150 15:39:37 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:42.150 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:42.150 15:39:37 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:42.150 15:39:37 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:06:42.150 15:39:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:42.150 15:39:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # 
declare -A app_pid 00:06:42.150 15:39:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:42.150 15:39:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:42.150 15:39:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:42.150 15:39:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:42.150 15:39:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:42.150 15:39:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:42.150 15:39:37 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:42.150 15:39:37 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:42.150 INFO: launching applications... 00:06:42.150 15:39:37 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:06:42.150 15:39:37 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:42.150 15:39:37 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:42.150 15:39:37 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:42.150 15:39:37 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:42.150 15:39:37 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:42.150 15:39:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:42.150 15:39:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:42.150 15:39:37 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=897026 00:06:42.150 15:39:37 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:42.150 Waiting for target to run... 00:06:42.150 15:39:37 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 897026 /var/tmp/spdk_tgt.sock 00:06:42.150 15:39:37 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 897026 ']' 00:06:42.150 15:39:37 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:42.150 15:39:37 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:06:42.150 15:39:37 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.150 15:39:37 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:42.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
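json_config_extra_key starts the target with a pre-built JSON config on a dedicated RPC socket and then waits for it to come up. A simplified sketch of that launch-and-wait pattern follows; the real waitforlisten helper in autotest_common.sh does more than this plain socket poll, which is an assumption here:

    SPDK_BIN=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
    SOCK=/var/tmp/spdk_tgt.sock
    CONF=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json

    # Launch the target in the background with the extra_key config.
    "$SPDK_BIN" -m 0x1 -s 1024 -r "$SOCK" --json "$CONF" &
    pid=$!

    # Wait for the RPC Unix domain socket to appear before using it.
    for _ in $(seq 1 100); do
        [ -S "$SOCK" ] && break
        sleep 0.1
    done
    echo "spdk_tgt ($pid) is listening on $SOCK"
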
00:06:42.150 15:39:37 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.150 15:39:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:42.150 [2024-12-09 15:39:37.267273] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:06:42.150 [2024-12-09 15:39:37.267344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid897026 ] 00:06:42.718 [2024-12-09 15:39:37.714698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.718 [2024-12-09 15:39:37.763457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.977 15:39:38 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.977 15:39:38 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:42.977 15:39:38 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:42.977 00:06:42.977 15:39:38 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:42.977 INFO: shutting down applications... 00:06:42.977 15:39:38 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:42.977 15:39:38 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:42.977 15:39:38 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:42.977 15:39:38 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 897026 ]] 00:06:42.977 15:39:38 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 897026 00:06:42.977 15:39:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:42.977 15:39:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:42.977 15:39:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 897026 00:06:42.977 15:39:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:43.546 15:39:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:43.546 15:39:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:43.546 15:39:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 897026 00:06:43.546 15:39:38 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:43.546 15:39:38 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:43.546 15:39:38 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:43.546 15:39:38 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:43.546 SPDK target shutdown done 00:06:43.546 15:39:38 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:43.546 Success 00:06:43.546 00:06:43.546 real 0m1.574s 00:06:43.546 user 0m1.158s 00:06:43.546 sys 0m0.588s 00:06:43.546 15:39:38 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.546 15:39:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:43.546 ************************************ 00:06:43.546 END TEST json_config_extra_key 00:06:43.546 ************************************ 00:06:43.546 15:39:38 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 
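The shutdown recorded above sends SIGINT and then probes the pid with kill -0 up to 30 times, half a second apart, before declaring 'SPDK target shutdown done'. A simplified reconstruction of that loop (not the json_config/common.sh source itself):

    shutdown_spdk_tgt() {
        local pid=$1
        # Ask the target to exit; if it is already gone, treat that as done.
        kill -SIGINT "$pid" || return 0
        for (( i = 0; i < 30; i++ )); do
            kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; return 0; }
            sleep 0.5
        done
        echo "target $pid did not exit in time" >&2
        return 1
    }
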
00:06:43.546 15:39:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.546 15:39:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.546 15:39:38 -- common/autotest_common.sh@10 -- # set +x 00:06:43.546 ************************************ 00:06:43.546 START TEST alias_rpc 00:06:43.546 ************************************ 00:06:43.546 15:39:38 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:43.806 * Looking for test storage... 00:06:43.806 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:06:43.806 15:39:38 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:43.806 15:39:38 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:43.806 15:39:38 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:43.806 15:39:38 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.806 15:39:38 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:43.806 15:39:38 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.806 15:39:38 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:43.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.806 --rc genhtml_branch_coverage=1 00:06:43.806 --rc genhtml_function_coverage=1 00:06:43.806 --rc genhtml_legend=1 00:06:43.806 --rc geninfo_all_blocks=1 00:06:43.806 --rc geninfo_unexecuted_blocks=1 00:06:43.806 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:43.806 ' 00:06:43.806 15:39:38 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:43.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.806 --rc genhtml_branch_coverage=1 00:06:43.806 --rc genhtml_function_coverage=1 00:06:43.806 --rc genhtml_legend=1 00:06:43.806 --rc geninfo_all_blocks=1 00:06:43.806 --rc geninfo_unexecuted_blocks=1 00:06:43.806 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:43.806 ' 00:06:43.806 15:39:38 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:43.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.806 --rc genhtml_branch_coverage=1 00:06:43.806 --rc genhtml_function_coverage=1 00:06:43.806 --rc genhtml_legend=1 00:06:43.806 --rc geninfo_all_blocks=1 00:06:43.806 --rc geninfo_unexecuted_blocks=1 00:06:43.806 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:43.806 ' 00:06:43.806 15:39:38 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:43.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.806 --rc genhtml_branch_coverage=1 00:06:43.806 --rc genhtml_function_coverage=1 00:06:43.806 --rc genhtml_legend=1 00:06:43.806 --rc geninfo_all_blocks=1 00:06:43.806 --rc geninfo_unexecuted_blocks=1 00:06:43.806 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:43.806 ' 00:06:43.806 15:39:38 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:43.806 15:39:38 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=897268 00:06:43.806 15:39:38 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:43.806 15:39:38 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 897268 00:06:43.806 15:39:38 alias_rpc -- 
common/autotest_common.sh@835 -- # '[' -z 897268 ']' 00:06:43.806 15:39:38 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.806 15:39:38 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.806 15:39:38 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.806 15:39:38 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.806 15:39:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.806 [2024-12-09 15:39:38.910962] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:06:43.806 [2024-12-09 15:39:38.911029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid897268 ] 00:06:43.806 [2024-12-09 15:39:38.981823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.806 [2024-12-09 15:39:39.030220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.065 15:39:39 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.065 15:39:39 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:44.065 15:39:39 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:44.325 15:39:39 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 897268 00:06:44.325 15:39:39 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 897268 ']' 00:06:44.325 15:39:39 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 897268 00:06:44.325 15:39:39 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:44.325 15:39:39 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.325 15:39:39 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 897268 00:06:44.325 15:39:39 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:44.325 15:39:39 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.325 15:39:39 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 897268' 00:06:44.325 killing process with pid 897268 00:06:44.325 15:39:39 alias_rpc -- common/autotest_common.sh@973 -- # kill 897268 00:06:44.325 15:39:39 alias_rpc -- common/autotest_common.sh@978 -- # wait 897268 00:06:44.894 00:06:44.894 real 0m1.120s 00:06:44.894 user 0m1.109s 00:06:44.894 sys 0m0.450s 00:06:44.894 15:39:39 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.894 15:39:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.894 ************************************ 00:06:44.894 END TEST alias_rpc 00:06:44.894 ************************************ 00:06:44.894 15:39:39 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:44.894 15:39:39 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:44.894 15:39:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.894 15:39:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.894 15:39:39 -- common/autotest_common.sh@10 -- # set +x 00:06:44.894 ************************************ 00:06:44.894 START TEST spdkcli_tcp 
00:06:44.894 ************************************ 00:06:44.894 15:39:39 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:44.894 * Looking for test storage... 00:06:44.894 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:06:44.894 15:39:39 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:44.894 15:39:39 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:44.894 15:39:39 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:44.894 15:39:40 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:44.894 15:39:40 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.894 15:39:40 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.894 15:39:40 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.894 15:39:40 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.894 15:39:40 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.894 15:39:40 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.894 15:39:40 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.894 15:39:40 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.894 15:39:40 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.894 15:39:40 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.894 15:39:40 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.894 15:39:40 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:44.894 15:39:40 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:44.894 15:39:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.894 15:39:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.895 15:39:40 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:44.895 15:39:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:44.895 15:39:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.895 15:39:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:44.895 15:39:40 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.895 15:39:40 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:44.895 15:39:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:44.895 15:39:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.895 15:39:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:44.895 15:39:40 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.895 15:39:40 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.895 15:39:40 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.895 15:39:40 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:44.895 15:39:40 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.895 15:39:40 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:44.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.895 --rc genhtml_branch_coverage=1 00:06:44.895 --rc genhtml_function_coverage=1 00:06:44.895 --rc genhtml_legend=1 00:06:44.895 --rc geninfo_all_blocks=1 00:06:44.895 --rc geninfo_unexecuted_blocks=1 00:06:44.895 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:44.895 ' 00:06:44.895 15:39:40 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:44.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.895 --rc genhtml_branch_coverage=1 00:06:44.895 --rc genhtml_function_coverage=1 00:06:44.895 --rc genhtml_legend=1 00:06:44.895 --rc geninfo_all_blocks=1 00:06:44.895 --rc geninfo_unexecuted_blocks=1 00:06:44.895 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:44.895 ' 00:06:44.895 15:39:40 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:44.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.895 --rc genhtml_branch_coverage=1 00:06:44.895 --rc genhtml_function_coverage=1 00:06:44.895 --rc genhtml_legend=1 00:06:44.895 --rc geninfo_all_blocks=1 00:06:44.895 --rc geninfo_unexecuted_blocks=1 00:06:44.895 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:44.895 ' 00:06:44.895 15:39:40 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:44.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.895 --rc genhtml_branch_coverage=1 00:06:44.895 --rc genhtml_function_coverage=1 00:06:44.895 --rc genhtml_legend=1 00:06:44.895 --rc geninfo_all_blocks=1 00:06:44.895 --rc geninfo_unexecuted_blocks=1 00:06:44.895 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:44.895 ' 00:06:44.895 15:39:40 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:06:44.895 15:39:40 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:44.895 15:39:40 spdkcli_tcp -- spdkcli/common.sh@7 -- # 
spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:06:44.895 15:39:40 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:44.895 15:39:40 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:44.895 15:39:40 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:44.895 15:39:40 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:44.895 15:39:40 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:44.895 15:39:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:44.895 15:39:40 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=897501 00:06:44.895 15:39:40 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:44.895 15:39:40 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 897501 00:06:44.895 15:39:40 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 897501 ']' 00:06:44.895 15:39:40 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.895 15:39:40 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.895 15:39:40 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.895 15:39:40 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.895 15:39:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:44.895 [2024-12-09 15:39:40.102464] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:06:44.895 [2024-12-09 15:39:40.102532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid897501 ] 00:06:45.155 [2024-12-09 15:39:40.174766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:45.155 [2024-12-09 15:39:40.220229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.155 [2024-12-09 15:39:40.220231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.415 15:39:40 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.415 15:39:40 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:45.415 15:39:40 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=897514 00:06:45.415 15:39:40 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:45.415 15:39:40 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:45.415 [ 00:06:45.415 "spdk_get_version", 00:06:45.415 "rpc_get_methods", 00:06:45.415 "notify_get_notifications", 00:06:45.415 "notify_get_types", 00:06:45.415 "trace_get_info", 00:06:45.415 "trace_get_tpoint_group_mask", 00:06:45.415 "trace_disable_tpoint_group", 00:06:45.415 "trace_enable_tpoint_group", 00:06:45.415 "trace_clear_tpoint_mask", 00:06:45.415 "trace_set_tpoint_mask", 00:06:45.415 "fsdev_set_opts", 00:06:45.415 "fsdev_get_opts", 00:06:45.415 "framework_get_pci_devices", 00:06:45.415 "framework_get_config", 00:06:45.415 "framework_get_subsystems", 00:06:45.415 "vfu_tgt_set_base_path", 00:06:45.415 "keyring_get_keys", 
00:06:45.415 "iobuf_get_stats", 00:06:45.415 "iobuf_set_options", 00:06:45.415 "sock_get_default_impl", 00:06:45.415 "sock_set_default_impl", 00:06:45.415 "sock_impl_set_options", 00:06:45.415 "sock_impl_get_options", 00:06:45.415 "vmd_rescan", 00:06:45.415 "vmd_remove_device", 00:06:45.415 "vmd_enable", 00:06:45.415 "accel_get_stats", 00:06:45.415 "accel_set_options", 00:06:45.415 "accel_set_driver", 00:06:45.415 "accel_crypto_key_destroy", 00:06:45.415 "accel_crypto_keys_get", 00:06:45.415 "accel_crypto_key_create", 00:06:45.415 "accel_assign_opc", 00:06:45.415 "accel_get_module_info", 00:06:45.415 "accel_get_opc_assignments", 00:06:45.415 "bdev_get_histogram", 00:06:45.415 "bdev_enable_histogram", 00:06:45.415 "bdev_set_qos_limit", 00:06:45.415 "bdev_set_qd_sampling_period", 00:06:45.415 "bdev_get_bdevs", 00:06:45.415 "bdev_reset_iostat", 00:06:45.415 "bdev_get_iostat", 00:06:45.415 "bdev_examine", 00:06:45.415 "bdev_wait_for_examine", 00:06:45.415 "bdev_set_options", 00:06:45.415 "scsi_get_devices", 00:06:45.415 "thread_set_cpumask", 00:06:45.415 "scheduler_set_options", 00:06:45.415 "framework_get_governor", 00:06:45.415 "framework_get_scheduler", 00:06:45.415 "framework_set_scheduler", 00:06:45.415 "framework_get_reactors", 00:06:45.415 "thread_get_io_channels", 00:06:45.415 "thread_get_pollers", 00:06:45.415 "thread_get_stats", 00:06:45.415 "framework_monitor_context_switch", 00:06:45.415 "spdk_kill_instance", 00:06:45.415 "log_enable_timestamps", 00:06:45.415 "log_get_flags", 00:06:45.415 "log_clear_flag", 00:06:45.415 "log_set_flag", 00:06:45.415 "log_get_level", 00:06:45.415 "log_set_level", 00:06:45.415 "log_get_print_level", 00:06:45.415 "log_set_print_level", 00:06:45.415 "framework_enable_cpumask_locks", 00:06:45.415 "framework_disable_cpumask_locks", 00:06:45.415 "framework_wait_init", 00:06:45.415 "framework_start_init", 00:06:45.415 "virtio_blk_create_transport", 00:06:45.415 "virtio_blk_get_transports", 00:06:45.415 "vhost_controller_set_coalescing", 00:06:45.415 "vhost_get_controllers", 00:06:45.415 "vhost_delete_controller", 00:06:45.415 "vhost_create_blk_controller", 00:06:45.415 "vhost_scsi_controller_remove_target", 00:06:45.415 "vhost_scsi_controller_add_target", 00:06:45.415 "vhost_start_scsi_controller", 00:06:45.415 "vhost_create_scsi_controller", 00:06:45.415 "ublk_recover_disk", 00:06:45.415 "ublk_get_disks", 00:06:45.415 "ublk_stop_disk", 00:06:45.415 "ublk_start_disk", 00:06:45.415 "ublk_destroy_target", 00:06:45.415 "ublk_create_target", 00:06:45.415 "nbd_get_disks", 00:06:45.415 "nbd_stop_disk", 00:06:45.415 "nbd_start_disk", 00:06:45.415 "env_dpdk_get_mem_stats", 00:06:45.415 "nvmf_stop_mdns_prr", 00:06:45.415 "nvmf_publish_mdns_prr", 00:06:45.415 "nvmf_subsystem_get_listeners", 00:06:45.415 "nvmf_subsystem_get_qpairs", 00:06:45.415 "nvmf_subsystem_get_controllers", 00:06:45.415 "nvmf_get_stats", 00:06:45.415 "nvmf_get_transports", 00:06:45.415 "nvmf_create_transport", 00:06:45.415 "nvmf_get_targets", 00:06:45.415 "nvmf_delete_target", 00:06:45.415 "nvmf_create_target", 00:06:45.415 "nvmf_subsystem_allow_any_host", 00:06:45.415 "nvmf_subsystem_set_keys", 00:06:45.415 "nvmf_subsystem_remove_host", 00:06:45.415 "nvmf_subsystem_add_host", 00:06:45.415 "nvmf_ns_remove_host", 00:06:45.415 "nvmf_ns_add_host", 00:06:45.415 "nvmf_subsystem_remove_ns", 00:06:45.415 "nvmf_subsystem_set_ns_ana_group", 00:06:45.415 "nvmf_subsystem_add_ns", 00:06:45.415 "nvmf_subsystem_listener_set_ana_state", 00:06:45.415 "nvmf_discovery_get_referrals", 00:06:45.415 
"nvmf_discovery_remove_referral", 00:06:45.415 "nvmf_discovery_add_referral", 00:06:45.415 "nvmf_subsystem_remove_listener", 00:06:45.415 "nvmf_subsystem_add_listener", 00:06:45.415 "nvmf_delete_subsystem", 00:06:45.415 "nvmf_create_subsystem", 00:06:45.415 "nvmf_get_subsystems", 00:06:45.415 "nvmf_set_crdt", 00:06:45.415 "nvmf_set_config", 00:06:45.415 "nvmf_set_max_subsystems", 00:06:45.415 "iscsi_get_histogram", 00:06:45.415 "iscsi_enable_histogram", 00:06:45.415 "iscsi_set_options", 00:06:45.415 "iscsi_get_auth_groups", 00:06:45.415 "iscsi_auth_group_remove_secret", 00:06:45.415 "iscsi_auth_group_add_secret", 00:06:45.415 "iscsi_delete_auth_group", 00:06:45.415 "iscsi_create_auth_group", 00:06:45.415 "iscsi_set_discovery_auth", 00:06:45.415 "iscsi_get_options", 00:06:45.415 "iscsi_target_node_request_logout", 00:06:45.415 "iscsi_target_node_set_redirect", 00:06:45.415 "iscsi_target_node_set_auth", 00:06:45.415 "iscsi_target_node_add_lun", 00:06:45.415 "iscsi_get_stats", 00:06:45.415 "iscsi_get_connections", 00:06:45.415 "iscsi_portal_group_set_auth", 00:06:45.415 "iscsi_start_portal_group", 00:06:45.415 "iscsi_delete_portal_group", 00:06:45.415 "iscsi_create_portal_group", 00:06:45.415 "iscsi_get_portal_groups", 00:06:45.415 "iscsi_delete_target_node", 00:06:45.415 "iscsi_target_node_remove_pg_ig_maps", 00:06:45.415 "iscsi_target_node_add_pg_ig_maps", 00:06:45.415 "iscsi_create_target_node", 00:06:45.415 "iscsi_get_target_nodes", 00:06:45.415 "iscsi_delete_initiator_group", 00:06:45.415 "iscsi_initiator_group_remove_initiators", 00:06:45.415 "iscsi_initiator_group_add_initiators", 00:06:45.415 "iscsi_create_initiator_group", 00:06:45.415 "iscsi_get_initiator_groups", 00:06:45.415 "fsdev_aio_delete", 00:06:45.415 "fsdev_aio_create", 00:06:45.415 "keyring_linux_set_options", 00:06:45.415 "keyring_file_remove_key", 00:06:45.415 "keyring_file_add_key", 00:06:45.415 "vfu_virtio_create_fs_endpoint", 00:06:45.415 "vfu_virtio_create_scsi_endpoint", 00:06:45.415 "vfu_virtio_scsi_remove_target", 00:06:45.415 "vfu_virtio_scsi_add_target", 00:06:45.415 "vfu_virtio_create_blk_endpoint", 00:06:45.415 "vfu_virtio_delete_endpoint", 00:06:45.415 "iaa_scan_accel_module", 00:06:45.415 "dsa_scan_accel_module", 00:06:45.415 "ioat_scan_accel_module", 00:06:45.415 "accel_error_inject_error", 00:06:45.415 "bdev_iscsi_delete", 00:06:45.415 "bdev_iscsi_create", 00:06:45.415 "bdev_iscsi_set_options", 00:06:45.415 "bdev_virtio_attach_controller", 00:06:45.415 "bdev_virtio_scsi_get_devices", 00:06:45.415 "bdev_virtio_detach_controller", 00:06:45.415 "bdev_virtio_blk_set_hotplug", 00:06:45.415 "bdev_ftl_set_property", 00:06:45.415 "bdev_ftl_get_properties", 00:06:45.415 "bdev_ftl_get_stats", 00:06:45.415 "bdev_ftl_unmap", 00:06:45.415 "bdev_ftl_unload", 00:06:45.415 "bdev_ftl_delete", 00:06:45.415 "bdev_ftl_load", 00:06:45.415 "bdev_ftl_create", 00:06:45.415 "bdev_aio_delete", 00:06:45.415 "bdev_aio_rescan", 00:06:45.415 "bdev_aio_create", 00:06:45.415 "blobfs_create", 00:06:45.415 "blobfs_detect", 00:06:45.415 "blobfs_set_cache_size", 00:06:45.415 "bdev_zone_block_delete", 00:06:45.415 "bdev_zone_block_create", 00:06:45.415 "bdev_delay_delete", 00:06:45.416 "bdev_delay_create", 00:06:45.416 "bdev_delay_update_latency", 00:06:45.416 "bdev_split_delete", 00:06:45.416 "bdev_split_create", 00:06:45.416 "bdev_error_inject_error", 00:06:45.416 "bdev_error_delete", 00:06:45.416 "bdev_error_create", 00:06:45.416 "bdev_raid_set_options", 00:06:45.416 "bdev_raid_remove_base_bdev", 00:06:45.416 "bdev_raid_add_base_bdev", 
00:06:45.416 "bdev_raid_delete", 00:06:45.416 "bdev_raid_create", 00:06:45.416 "bdev_raid_get_bdevs", 00:06:45.416 "bdev_lvol_set_parent_bdev", 00:06:45.416 "bdev_lvol_set_parent", 00:06:45.416 "bdev_lvol_check_shallow_copy", 00:06:45.416 "bdev_lvol_start_shallow_copy", 00:06:45.416 "bdev_lvol_grow_lvstore", 00:06:45.416 "bdev_lvol_get_lvols", 00:06:45.416 "bdev_lvol_get_lvstores", 00:06:45.416 "bdev_lvol_delete", 00:06:45.416 "bdev_lvol_set_read_only", 00:06:45.416 "bdev_lvol_resize", 00:06:45.416 "bdev_lvol_decouple_parent", 00:06:45.416 "bdev_lvol_inflate", 00:06:45.416 "bdev_lvol_rename", 00:06:45.416 "bdev_lvol_clone_bdev", 00:06:45.416 "bdev_lvol_clone", 00:06:45.416 "bdev_lvol_snapshot", 00:06:45.416 "bdev_lvol_create", 00:06:45.416 "bdev_lvol_delete_lvstore", 00:06:45.416 "bdev_lvol_rename_lvstore", 00:06:45.416 "bdev_lvol_create_lvstore", 00:06:45.416 "bdev_passthru_delete", 00:06:45.416 "bdev_passthru_create", 00:06:45.416 "bdev_nvme_cuse_unregister", 00:06:45.416 "bdev_nvme_cuse_register", 00:06:45.416 "bdev_opal_new_user", 00:06:45.416 "bdev_opal_set_lock_state", 00:06:45.416 "bdev_opal_delete", 00:06:45.416 "bdev_opal_get_info", 00:06:45.416 "bdev_opal_create", 00:06:45.416 "bdev_nvme_opal_revert", 00:06:45.416 "bdev_nvme_opal_init", 00:06:45.416 "bdev_nvme_send_cmd", 00:06:45.416 "bdev_nvme_set_keys", 00:06:45.416 "bdev_nvme_get_path_iostat", 00:06:45.416 "bdev_nvme_get_mdns_discovery_info", 00:06:45.416 "bdev_nvme_stop_mdns_discovery", 00:06:45.416 "bdev_nvme_start_mdns_discovery", 00:06:45.416 "bdev_nvme_set_multipath_policy", 00:06:45.416 "bdev_nvme_set_preferred_path", 00:06:45.416 "bdev_nvme_get_io_paths", 00:06:45.416 "bdev_nvme_remove_error_injection", 00:06:45.416 "bdev_nvme_add_error_injection", 00:06:45.416 "bdev_nvme_get_discovery_info", 00:06:45.416 "bdev_nvme_stop_discovery", 00:06:45.416 "bdev_nvme_start_discovery", 00:06:45.416 "bdev_nvme_get_controller_health_info", 00:06:45.416 "bdev_nvme_disable_controller", 00:06:45.416 "bdev_nvme_enable_controller", 00:06:45.416 "bdev_nvme_reset_controller", 00:06:45.416 "bdev_nvme_get_transport_statistics", 00:06:45.416 "bdev_nvme_apply_firmware", 00:06:45.416 "bdev_nvme_detach_controller", 00:06:45.416 "bdev_nvme_get_controllers", 00:06:45.416 "bdev_nvme_attach_controller", 00:06:45.416 "bdev_nvme_set_hotplug", 00:06:45.416 "bdev_nvme_set_options", 00:06:45.416 "bdev_null_resize", 00:06:45.416 "bdev_null_delete", 00:06:45.416 "bdev_null_create", 00:06:45.416 "bdev_malloc_delete", 00:06:45.416 "bdev_malloc_create" 00:06:45.416 ] 00:06:45.416 15:39:40 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:45.416 15:39:40 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:45.416 15:39:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:45.674 15:39:40 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:45.674 15:39:40 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 897501 00:06:45.674 15:39:40 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 897501 ']' 00:06:45.674 15:39:40 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 897501 00:06:45.674 15:39:40 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:45.674 15:39:40 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.674 15:39:40 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 897501 00:06:45.674 15:39:40 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.674 15:39:40 spdkcli_tcp -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.674 15:39:40 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 897501' 00:06:45.674 killing process with pid 897501 00:06:45.674 15:39:40 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 897501 00:06:45.674 15:39:40 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 897501 00:06:45.933 00:06:45.933 real 0m1.131s 00:06:45.933 user 0m1.873s 00:06:45.933 sys 0m0.494s 00:06:45.933 15:39:41 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.933 15:39:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:45.933 ************************************ 00:06:45.933 END TEST spdkcli_tcp 00:06:45.933 ************************************ 00:06:45.933 15:39:41 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:45.933 15:39:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.933 15:39:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.933 15:39:41 -- common/autotest_common.sh@10 -- # set +x 00:06:45.933 ************************************ 00:06:45.933 START TEST dpdk_mem_utility 00:06:45.933 ************************************ 00:06:45.933 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:46.193 * Looking for test storage... 00:06:46.193 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:06:46.193 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:46.193 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:46.193 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:06:46.193 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.193 15:39:41 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:46.193 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.193 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:46.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.193 --rc genhtml_branch_coverage=1 00:06:46.193 --rc genhtml_function_coverage=1 00:06:46.193 --rc genhtml_legend=1 00:06:46.193 --rc geninfo_all_blocks=1 00:06:46.193 --rc geninfo_unexecuted_blocks=1 00:06:46.193 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.193 ' 00:06:46.193 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:46.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.193 --rc genhtml_branch_coverage=1 00:06:46.193 --rc genhtml_function_coverage=1 00:06:46.193 --rc genhtml_legend=1 00:06:46.193 --rc geninfo_all_blocks=1 00:06:46.193 --rc geninfo_unexecuted_blocks=1 00:06:46.193 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.193 ' 00:06:46.193 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:46.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.193 --rc genhtml_branch_coverage=1 00:06:46.193 --rc genhtml_function_coverage=1 00:06:46.193 --rc genhtml_legend=1 00:06:46.193 --rc geninfo_all_blocks=1 00:06:46.193 --rc geninfo_unexecuted_blocks=1 00:06:46.193 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.193 ' 00:06:46.193 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:46.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.193 --rc genhtml_branch_coverage=1 00:06:46.193 --rc genhtml_function_coverage=1 00:06:46.193 --rc genhtml_legend=1 00:06:46.193 --rc geninfo_all_blocks=1 00:06:46.193 --rc geninfo_unexecuted_blocks=1 00:06:46.193 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.193 ' 00:06:46.193 15:39:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:46.193 15:39:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=897751 00:06:46.193 15:39:41 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:46.193 15:39:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 897751 00:06:46.193 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 897751 ']' 00:06:46.193 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.193 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.193 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.193 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.193 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:46.193 [2024-12-09 15:39:41.309154] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:06:46.193 [2024-12-09 15:39:41.309241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid897751 ] 00:06:46.193 [2024-12-09 15:39:41.381821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.452 [2024-12-09 15:39:41.431273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.452 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.452 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:46.452 15:39:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:46.452 15:39:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:46.452 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.452 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:46.452 { 00:06:46.452 "filename": "/tmp/spdk_mem_dump.txt" 00:06:46.452 } 00:06:46.452 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.452 15:39:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:46.712 DPDK memory size 818.000000 MiB in 1 heap(s) 00:06:46.712 1 heaps totaling size 818.000000 MiB 00:06:46.712 size: 818.000000 MiB heap id: 0 00:06:46.712 end heaps---------- 00:06:46.712 9 mempools totaling size 603.782043 MiB 00:06:46.712 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:46.712 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:46.712 size: 100.555481 MiB name: bdev_io_897751 00:06:46.712 size: 50.003479 MiB name: msgpool_897751 00:06:46.712 size: 36.509338 MiB name: fsdev_io_897751 00:06:46.712 size: 21.763794 MiB name: PDU_Pool 00:06:46.712 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:46.712 size: 4.133484 MiB name: evtpool_897751 00:06:46.712 size: 0.026123 MiB name: Session_Pool 00:06:46.712 end mempools------- 00:06:46.712 6 memzones totaling size 4.142822 MiB 00:06:46.712 size: 1.000366 MiB name: RG_ring_0_897751 00:06:46.712 size: 1.000366 MiB name: RG_ring_1_897751 00:06:46.712 size: 1.000366 MiB name: RG_ring_4_897751 
00:06:46.712 size: 1.000366 MiB name: RG_ring_5_897751 00:06:46.712 size: 0.125366 MiB name: RG_ring_2_897751 00:06:46.712 size: 0.015991 MiB name: RG_ring_3_897751 00:06:46.712 end memzones------- 00:06:46.712 15:39:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:46.712 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:46.712 list of free elements. size: 10.852478 MiB 00:06:46.712 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:46.712 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:46.712 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:46.712 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:46.712 element at address: 0x200008000000 with size: 0.959839 MiB 00:06:46.712 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:46.712 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:46.712 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:46.712 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:06:46.712 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:46.712 element at address: 0x200003e00000 with size: 0.490723 MiB 00:06:46.712 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:46.712 element at address: 0x200010600000 with size: 0.481934 MiB 00:06:46.712 element at address: 0x200028200000 with size: 0.410034 MiB 00:06:46.712 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:46.712 list of standard malloc elements. size: 199.218628 MiB 00:06:46.712 element at address: 0x2000081fff80 with size: 132.000122 MiB 00:06:46.712 element at address: 0x200003ffff80 with size: 64.000122 MiB 00:06:46.712 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:46.712 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:46.712 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:46.712 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:46.712 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:46.712 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:46.712 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:46.712 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:46.712 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:46.712 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:46.712 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:46.712 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:46.712 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:46.712 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:46.712 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:46.712 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:46.712 element at address: 0x20000085b100 with size: 0.000183 MiB 00:06:46.712 element at address: 0x2000008db3c0 with size: 0.000183 MiB 00:06:46.712 element at address: 0x2000008db5c0 with size: 0.000183 MiB 00:06:46.712 element at address: 0x2000008df880 with size: 0.000183 MiB 00:06:46.712 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:46.712 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:46.712 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:46.712 element at address: 0x200000cff0c0 with size: 0.000183 
MiB 00:06:46.712 element at address: 0x200003e7da00 with size: 0.000183 MiB 00:06:46.712 element at address: 0x200003e7dac0 with size: 0.000183 MiB 00:06:46.712 element at address: 0x200003efdd80 with size: 0.000183 MiB 00:06:46.712 element at address: 0x2000080fdd80 with size: 0.000183 MiB 00:06:46.712 element at address: 0x20001067b600 with size: 0.000183 MiB 00:06:46.712 element at address: 0x20001067b6c0 with size: 0.000183 MiB 00:06:46.712 element at address: 0x2000106fb980 with size: 0.000183 MiB 00:06:46.712 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:46.712 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:46.712 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:46.712 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:46.712 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:46.712 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:46.712 element at address: 0x200028268f80 with size: 0.000183 MiB 00:06:46.712 element at address: 0x200028269040 with size: 0.000183 MiB 00:06:46.712 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:06:46.712 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:46.712 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:46.712 list of memzone associated elements. size: 607.928894 MiB 00:06:46.712 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:46.712 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:46.712 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:46.712 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:46.712 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:46.712 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_897751_0 00:06:46.712 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:46.712 associated memzone info: size: 48.002930 MiB name: MP_msgpool_897751_0 00:06:46.712 element at address: 0x2000107fdb80 with size: 36.008911 MiB 00:06:46.712 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_897751_0 00:06:46.713 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:46.713 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:46.713 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:46.713 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:46.713 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:46.713 associated memzone info: size: 3.000122 MiB name: MP_evtpool_897751_0 00:06:46.713 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:46.713 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_897751 00:06:46.713 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:46.713 associated memzone info: size: 1.007996 MiB name: MP_evtpool_897751 00:06:46.713 element at address: 0x2000106fba40 with size: 1.008118 MiB 00:06:46.713 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:46.713 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:46.713 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:46.713 element at address: 0x2000080fde40 with size: 1.008118 MiB 00:06:46.713 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:46.713 element at address: 0x200003efde40 with size: 1.008118 MiB 00:06:46.713 associated memzone info: size: 1.007996 MiB name: 
MP_SCSI_TASK_Pool 00:06:46.713 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:46.713 associated memzone info: size: 1.000366 MiB name: RG_ring_0_897751 00:06:46.713 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:46.713 associated memzone info: size: 1.000366 MiB name: RG_ring_1_897751 00:06:46.713 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:46.713 associated memzone info: size: 1.000366 MiB name: RG_ring_4_897751 00:06:46.713 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:06:46.713 associated memzone info: size: 1.000366 MiB name: RG_ring_5_897751 00:06:46.713 element at address: 0x20000085b1c0 with size: 0.500488 MiB 00:06:46.713 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_897751 00:06:46.713 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:46.713 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_897751 00:06:46.713 element at address: 0x20001067b780 with size: 0.500488 MiB 00:06:46.713 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:46.713 element at address: 0x200003e7db80 with size: 0.500488 MiB 00:06:46.713 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:46.713 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:46.713 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:46.713 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:46.713 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_897751 00:06:46.713 element at address: 0x2000008df940 with size: 0.125488 MiB 00:06:46.713 associated memzone info: size: 0.125366 MiB name: RG_ring_2_897751 00:06:46.713 element at address: 0x2000080f5b80 with size: 0.031738 MiB 00:06:46.713 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:46.713 element at address: 0x200028269100 with size: 0.023743 MiB 00:06:46.713 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:46.713 element at address: 0x2000008db680 with size: 0.016113 MiB 00:06:46.713 associated memzone info: size: 0.015991 MiB name: RG_ring_3_897751 00:06:46.713 element at address: 0x20002826f240 with size: 0.002441 MiB 00:06:46.713 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:46.713 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:46.713 associated memzone info: size: 0.000183 MiB name: MP_msgpool_897751 00:06:46.713 element at address: 0x2000008db480 with size: 0.000305 MiB 00:06:46.713 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_897751 00:06:46.713 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:46.713 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_897751 00:06:46.713 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:06:46.713 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:46.713 15:39:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:46.713 15:39:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 897751 00:06:46.713 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 897751 ']' 00:06:46.713 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 897751 00:06:46.713 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:46.713 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:06:46.713 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 897751 00:06:46.713 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.713 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.713 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 897751' 00:06:46.713 killing process with pid 897751 00:06:46.713 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 897751 00:06:46.713 15:39:41 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 897751 00:06:46.972 00:06:46.972 real 0m1.007s 00:06:46.972 user 0m0.932s 00:06:46.972 sys 0m0.431s 00:06:46.972 15:39:42 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.972 15:39:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:46.973 ************************************ 00:06:46.973 END TEST dpdk_mem_utility 00:06:46.973 ************************************ 00:06:46.973 15:39:42 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:46.973 15:39:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.973 15:39:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.973 15:39:42 -- common/autotest_common.sh@10 -- # set +x 00:06:46.973 ************************************ 00:06:46.973 START TEST event 00:06:46.973 ************************************ 00:06:46.973 15:39:42 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:47.236 * Looking for test storage... 00:06:47.236 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:47.236 15:39:42 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:47.236 15:39:42 event -- common/autotest_common.sh@1711 -- # lcov --version 00:06:47.236 15:39:42 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:47.236 15:39:42 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:47.236 15:39:42 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.236 15:39:42 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.236 15:39:42 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.236 15:39:42 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.236 15:39:42 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.236 15:39:42 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.236 15:39:42 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.236 15:39:42 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.236 15:39:42 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.236 15:39:42 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.236 15:39:42 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.236 15:39:42 event -- scripts/common.sh@344 -- # case "$op" in 00:06:47.236 15:39:42 event -- scripts/common.sh@345 -- # : 1 00:06:47.236 15:39:42 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.236 15:39:42 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.236 15:39:42 event -- scripts/common.sh@365 -- # decimal 1 00:06:47.236 15:39:42 event -- scripts/common.sh@353 -- # local d=1 00:06:47.236 15:39:42 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.236 15:39:42 event -- scripts/common.sh@355 -- # echo 1 00:06:47.236 15:39:42 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.236 15:39:42 event -- scripts/common.sh@366 -- # decimal 2 00:06:47.236 15:39:42 event -- scripts/common.sh@353 -- # local d=2 00:06:47.236 15:39:42 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.236 15:39:42 event -- scripts/common.sh@355 -- # echo 2 00:06:47.236 15:39:42 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.236 15:39:42 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.236 15:39:42 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.236 15:39:42 event -- scripts/common.sh@368 -- # return 0 00:06:47.236 15:39:42 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.236 15:39:42 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:47.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.236 --rc genhtml_branch_coverage=1 00:06:47.236 --rc genhtml_function_coverage=1 00:06:47.236 --rc genhtml_legend=1 00:06:47.236 --rc geninfo_all_blocks=1 00:06:47.236 --rc geninfo_unexecuted_blocks=1 00:06:47.236 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:47.236 ' 00:06:47.236 15:39:42 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:47.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.236 --rc genhtml_branch_coverage=1 00:06:47.236 --rc genhtml_function_coverage=1 00:06:47.236 --rc genhtml_legend=1 00:06:47.236 --rc geninfo_all_blocks=1 00:06:47.236 --rc geninfo_unexecuted_blocks=1 00:06:47.236 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:47.236 ' 00:06:47.236 15:39:42 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:47.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.236 --rc genhtml_branch_coverage=1 00:06:47.236 --rc genhtml_function_coverage=1 00:06:47.236 --rc genhtml_legend=1 00:06:47.236 --rc geninfo_all_blocks=1 00:06:47.236 --rc geninfo_unexecuted_blocks=1 00:06:47.236 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:47.236 ' 00:06:47.236 15:39:42 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:47.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.236 --rc genhtml_branch_coverage=1 00:06:47.236 --rc genhtml_function_coverage=1 00:06:47.236 --rc genhtml_legend=1 00:06:47.236 --rc geninfo_all_blocks=1 00:06:47.236 --rc geninfo_unexecuted_blocks=1 00:06:47.236 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:47.236 ' 00:06:47.236 15:39:42 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:47.236 15:39:42 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:47.236 15:39:42 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:47.236 15:39:42 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:47.236 15:39:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:06:47.236 15:39:42 event -- common/autotest_common.sh@10 -- # set +x 00:06:47.236 ************************************ 00:06:47.236 START TEST event_perf 00:06:47.236 ************************************ 00:06:47.236 15:39:42 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:47.236 Running I/O for 1 seconds...[2024-12-09 15:39:42.437237] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:06:47.236 [2024-12-09 15:39:42.437319] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid897928 ] 00:06:47.517 [2024-12-09 15:39:42.512964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:47.517 [2024-12-09 15:39:42.561283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.517 [2024-12-09 15:39:42.561367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.517 [2024-12-09 15:39:42.561446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:47.517 [2024-12-09 15:39:42.561448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.547 Running I/O for 1 seconds... 00:06:48.547 lcore 0: 190983 00:06:48.547 lcore 1: 190984 00:06:48.547 lcore 2: 190983 00:06:48.547 lcore 3: 190984 00:06:48.547 done. 00:06:48.547 00:06:48.547 real 0m1.184s 00:06:48.547 user 0m4.095s 00:06:48.547 sys 0m0.087s 00:06:48.547 15:39:43 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.547 15:39:43 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:48.547 ************************************ 00:06:48.547 END TEST event_perf 00:06:48.547 ************************************ 00:06:48.547 15:39:43 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:48.547 15:39:43 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:48.547 15:39:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.547 15:39:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:48.547 ************************************ 00:06:48.547 START TEST event_reactor 00:06:48.547 ************************************ 00:06:48.547 15:39:43 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:48.547 [2024-12-09 15:39:43.704681] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:06:48.547 [2024-12-09 15:39:43.704766] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid898116 ] 00:06:48.833 [2024-12-09 15:39:43.782784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.833 [2024-12-09 15:39:43.827179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.769 test_start 00:06:49.769 oneshot 00:06:49.769 tick 100 00:06:49.769 tick 100 00:06:49.769 tick 250 00:06:49.769 tick 100 00:06:49.769 tick 100 00:06:49.769 tick 100 00:06:49.769 tick 250 00:06:49.769 tick 500 00:06:49.769 tick 100 00:06:49.769 tick 100 00:06:49.769 tick 250 00:06:49.769 tick 100 00:06:49.769 tick 100 00:06:49.769 test_end 00:06:49.769 00:06:49.769 real 0m1.181s 00:06:49.769 user 0m1.094s 00:06:49.769 sys 0m0.082s 00:06:49.769 15:39:44 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.769 15:39:44 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:49.769 ************************************ 00:06:49.769 END TEST event_reactor 00:06:49.769 ************************************ 00:06:49.769 15:39:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:49.769 15:39:44 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:49.769 15:39:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.769 15:39:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:49.769 ************************************ 00:06:49.769 START TEST event_reactor_perf 00:06:49.769 ************************************ 00:06:49.769 15:39:44 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:49.769 [2024-12-09 15:39:44.965867] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:06:49.769 [2024-12-09 15:39:44.965952] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid898272 ] 00:06:50.028 [2024-12-09 15:39:45.044040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.028 [2024-12-09 15:39:45.088700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.965 test_start 00:06:50.965 test_end 00:06:50.965 Performance: 922956 events per second 00:06:50.965 00:06:50.965 real 0m1.183s 00:06:50.965 user 0m1.098s 00:06:50.965 sys 0m0.081s 00:06:50.965 15:39:46 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.965 15:39:46 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:50.965 ************************************ 00:06:50.965 END TEST event_reactor_perf 00:06:50.965 ************************************ 00:06:50.965 15:39:46 event -- event/event.sh@49 -- # uname -s 00:06:50.965 15:39:46 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:50.965 15:39:46 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:50.965 15:39:46 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.965 15:39:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.965 15:39:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.224 ************************************ 00:06:51.224 START TEST event_scheduler 00:06:51.224 ************************************ 00:06:51.224 15:39:46 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:51.224 * Looking for test storage... 
00:06:51.224 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:06:51.224 15:39:46 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:51.224 15:39:46 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:51.224 15:39:46 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:51.224 15:39:46 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.224 15:39:46 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:51.224 15:39:46 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.224 15:39:46 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:51.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.224 --rc genhtml_branch_coverage=1 00:06:51.224 --rc genhtml_function_coverage=1 00:06:51.224 --rc genhtml_legend=1 00:06:51.224 --rc geninfo_all_blocks=1 00:06:51.224 --rc geninfo_unexecuted_blocks=1 00:06:51.224 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:51.224 ' 00:06:51.224 15:39:46 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:51.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.224 --rc genhtml_branch_coverage=1 00:06:51.224 --rc genhtml_function_coverage=1 00:06:51.224 --rc genhtml_legend=1 00:06:51.224 --rc geninfo_all_blocks=1 00:06:51.224 --rc geninfo_unexecuted_blocks=1 00:06:51.224 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:51.224 ' 00:06:51.224 15:39:46 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:51.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.224 --rc genhtml_branch_coverage=1 00:06:51.224 --rc genhtml_function_coverage=1 00:06:51.224 --rc genhtml_legend=1 00:06:51.224 --rc geninfo_all_blocks=1 00:06:51.224 --rc geninfo_unexecuted_blocks=1 00:06:51.224 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:51.224 ' 00:06:51.224 15:39:46 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:51.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.224 --rc genhtml_branch_coverage=1 00:06:51.224 --rc genhtml_function_coverage=1 00:06:51.224 --rc genhtml_legend=1 00:06:51.224 --rc geninfo_all_blocks=1 00:06:51.224 --rc geninfo_unexecuted_blocks=1 00:06:51.224 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:51.224 ' 00:06:51.224 15:39:46 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:51.224 15:39:46 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=898563 00:06:51.224 15:39:46 event.event_scheduler -- 
scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:51.224 15:39:46 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:51.224 15:39:46 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 898563 00:06:51.224 15:39:46 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 898563 ']' 00:06:51.224 15:39:46 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.224 15:39:46 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.224 15:39:46 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.224 15:39:46 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.224 15:39:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:51.224 [2024-12-09 15:39:46.415669] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:06:51.225 [2024-12-09 15:39:46.415740] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid898563 ] 00:06:51.484 [2024-12-09 15:39:46.483438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:51.484 [2024-12-09 15:39:46.535274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.484 [2024-12-09 15:39:46.535350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.484 [2024-12-09 15:39:46.535432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.484 [2024-12-09 15:39:46.535558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.484 15:39:46 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.484 15:39:46 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:51.484 15:39:46 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:51.484 15:39:46 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.484 15:39:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:51.484 [2024-12-09 15:39:46.604150] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:51.484 [2024-12-09 15:39:46.604170] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:51.484 [2024-12-09 15:39:46.604181] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:51.484 [2024-12-09 15:39:46.604189] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:51.484 [2024-12-09 15:39:46.604196] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:51.484 15:39:46 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.484 15:39:46 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:51.484 15:39:46 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.484 
15:39:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:51.484 [2024-12-09 15:39:46.679325] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:51.484 15:39:46 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.484 15:39:46 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:51.484 15:39:46 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.484 15:39:46 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.484 15:39:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:51.742 ************************************ 00:06:51.742 START TEST scheduler_create_thread 00:06:51.742 ************************************ 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.742 2 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.742 3 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.742 4 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.742 5 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.742 15:39:46 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.742 6 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.742 7 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.742 8 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.742 9 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.742 10 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.742 15:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.678 15:39:47 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.678 15:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:52.678 15:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.678 15:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.056 15:39:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.056 15:39:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:54.056 15:39:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:54.056 15:39:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.056 15:39:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.992 15:39:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.992 00:06:54.992 real 0m3.381s 00:06:54.992 user 0m0.024s 00:06:54.992 sys 0m0.007s 00:06:54.992 15:39:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.992 15:39:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.992 ************************************ 00:06:54.992 END TEST scheduler_create_thread 00:06:54.992 ************************************ 00:06:54.992 15:39:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:54.992 15:39:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 898563 00:06:54.992 15:39:50 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 898563 ']' 00:06:54.992 15:39:50 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 898563 00:06:54.992 15:39:50 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:54.992 15:39:50 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.992 15:39:50 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 898563 00:06:54.992 15:39:50 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:54.992 15:39:50 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:54.992 15:39:50 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 898563' 00:06:54.992 killing process with pid 898563 00:06:54.992 15:39:50 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 898563 00:06:54.992 15:39:50 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 898563 00:06:55.560 [2024-12-09 15:39:50.479297] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
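For reference, the scheduler_create_thread trace above reduces to a short RPC sequence against the scheduler test app (started earlier with -m 0xF -p 0x2 --wait-for-rpc and listening on the default /var/tmp/spdk.sock). A condensed sketch of that sequence follows; the thread IDs (11 and 12 in the trace) are simply whatever the create calls print back, and the per-mask loop below compresses the script's two separate passes over the active and idle threads:

  rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
  # scheduler.sh@39-40: pick the dynamic scheduler, then finish framework init
  $rpc framework_set_scheduler dynamic
  $rpc framework_start_init
  # scheduler.sh@12-19: one pinned active and one pinned idle thread per core mask
  # (the script creates the four active threads first, then the four idle ones)
  for mask in 0x1 0x2 0x4 0x8; do
    $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
    $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned   -m "$mask" -a 0
  done
  # scheduler.sh@21-26: unpinned threads; one is driven to 50% activity, one is deleted
  $rpc --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
  thread_id=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  $rpc --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
  thread_id=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  $rpc --plugin scheduler_plugin scheduler_thread_delete "$thread_id"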
00:06:55.560 00:06:55.560 real 0m4.465s 00:06:55.560 user 0m7.868s 00:06:55.560 sys 0m0.400s 00:06:55.560 15:39:50 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.561 15:39:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:55.561 ************************************ 00:06:55.561 END TEST event_scheduler 00:06:55.561 ************************************ 00:06:55.561 15:39:50 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:55.561 15:39:50 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:55.561 15:39:50 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.561 15:39:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.561 15:39:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:55.561 ************************************ 00:06:55.561 START TEST app_repeat 00:06:55.561 ************************************ 00:06:55.561 15:39:50 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:55.561 15:39:50 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.561 15:39:50 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.561 15:39:50 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:55.561 15:39:50 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:55.561 15:39:50 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:55.561 15:39:50 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:55.561 15:39:50 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:55.561 15:39:50 event.app_repeat -- event/event.sh@19 -- # repeat_pid=899210 00:06:55.561 15:39:50 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:55.561 15:39:50 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:55.561 15:39:50 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 899210' 00:06:55.561 Process app_repeat pid: 899210 00:06:55.561 15:39:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:55.561 15:39:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:55.561 spdk_app_start Round 0 00:06:55.561 15:39:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 899210 /var/tmp/spdk-nbd.sock 00:06:55.561 15:39:50 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 899210 ']' 00:06:55.561 15:39:50 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:55.561 15:39:50 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.561 15:39:50 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:55.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:55.561 15:39:50 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.561 15:39:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:55.821 [2024-12-09 15:39:50.799831] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:06:55.821 [2024-12-09 15:39:50.799925] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid899210 ] 00:06:55.821 [2024-12-09 15:39:50.874402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:55.821 [2024-12-09 15:39:50.920157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.821 [2024-12-09 15:39:50.920159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.821 15:39:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.821 15:39:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:55.821 15:39:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:56.080 Malloc0 00:06:56.080 15:39:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:56.339 Malloc1 00:06:56.339 15:39:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:56.339 15:39:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.339 15:39:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:56.339 15:39:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:56.339 15:39:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.339 15:39:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:56.339 15:39:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:56.339 15:39:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.339 15:39:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:56.339 15:39:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:56.339 15:39:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.339 15:39:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:56.339 15:39:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:56.339 15:39:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:56.339 15:39:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.339 15:39:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:56.598 /dev/nbd0 00:06:56.598 15:39:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:56.598 15:39:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:56.598 15:39:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:56.598 15:39:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:56.598 15:39:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.598 15:39:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.598 15:39:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:06:56.598 15:39:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:56.598 15:39:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.598 15:39:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.598 15:39:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:56.598 1+0 records in 00:06:56.598 1+0 records out 00:06:56.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215438 s, 19.0 MB/s 00:06:56.598 15:39:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:56.598 15:39:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:56.598 15:39:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:56.598 15:39:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.598 15:39:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:56.598 15:39:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.598 15:39:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.599 15:39:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:56.857 /dev/nbd1 00:06:56.857 15:39:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:56.857 15:39:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:56.857 15:39:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:56.857 15:39:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:56.857 15:39:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.857 15:39:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.857 15:39:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:56.857 15:39:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:56.857 15:39:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.857 15:39:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.857 15:39:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:56.857 1+0 records in 00:06:56.857 1+0 records out 00:06:56.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248883 s, 16.5 MB/s 00:06:56.858 15:39:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:56.858 15:39:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:56.858 15:39:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:56.858 15:39:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.858 15:39:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:56.858 15:39:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.858 15:39:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
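For reference, the attach sequence traced above (bdev_malloc_create, nbd_start_disk, then waitfornbd polling /proc/partitions and doing a direct 4 KiB read) condenses to roughly the sketch below; the retry delay and the shortened scratch-file path are assumptions of this sketch, not values from the log:

  rpc="/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  malloc0=$($rpc bdev_malloc_create 64 4096)      # prints the new bdev name (Malloc0 in the trace)
  malloc1=$($rpc bdev_malloc_create 64 4096)
  $rpc nbd_start_disk "$malloc0" /dev/nbd0
  $rpc nbd_start_disk "$malloc1" /dev/nbd1
  for nbd in nbd0 nbd1; do                        # waitfornbd(), autotest_common.sh@872-893
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd" /proc/partitions && break
      sleep 0.1                                   # retry delay assumed; the trace succeeds on the first try
    done
    dd if=/dev/$nbd of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # trace keeps this file under spdk/test/event/
    size=$(stat -c %s /tmp/nbdtest) && rm -f /tmp/nbdtest
    [ "$size" != 0 ]                              # the device must return a non-empty block
  done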
00:06:56.858 15:39:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:56.858 15:39:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.858 15:39:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:57.117 { 00:06:57.117 "nbd_device": "/dev/nbd0", 00:06:57.117 "bdev_name": "Malloc0" 00:06:57.117 }, 00:06:57.117 { 00:06:57.117 "nbd_device": "/dev/nbd1", 00:06:57.117 "bdev_name": "Malloc1" 00:06:57.117 } 00:06:57.117 ]' 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:57.117 { 00:06:57.117 "nbd_device": "/dev/nbd0", 00:06:57.117 "bdev_name": "Malloc0" 00:06:57.117 }, 00:06:57.117 { 00:06:57.117 "nbd_device": "/dev/nbd1", 00:06:57.117 "bdev_name": "Malloc1" 00:06:57.117 } 00:06:57.117 ]' 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:57.117 /dev/nbd1' 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:57.117 /dev/nbd1' 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:57.117 256+0 records in 00:06:57.117 256+0 records out 00:06:57.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115603 s, 90.7 MB/s 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:57.117 256+0 records in 00:06:57.117 256+0 records out 00:06:57.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203047 s, 51.6 MB/s 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:57.117 256+0 records in 00:06:57.117 256+0 records out 00:06:57.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222057 s, 47.2 
MB/s 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.117 15:39:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:57.376 15:39:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:57.376 15:39:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:57.376 15:39:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:57.376 15:39:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.376 15:39:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.376 15:39:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:57.376 15:39:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:57.376 15:39:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.376 15:39:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.376 15:39:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:57.635 15:39:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:57.635 15:39:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:57.635 15:39:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:57.635 15:39:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.635 15:39:52 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.635 15:39:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:57.635 15:39:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:57.635 15:39:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.635 15:39:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.635 15:39:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.635 15:39:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.894 15:39:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:57.894 15:39:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:57.894 15:39:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.894 15:39:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:57.894 15:39:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.894 15:39:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:57.894 15:39:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:57.894 15:39:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:57.894 15:39:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:57.894 15:39:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:57.894 15:39:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:57.894 15:39:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:57.894 15:39:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:58.153 15:39:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:58.153 [2024-12-09 15:39:53.284301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:58.153 [2024-12-09 15:39:53.327633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.153 [2024-12-09 15:39:53.327635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.153 [2024-12-09 15:39:53.367516] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:58.153 [2024-12-09 15:39:53.367563] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:01.447 15:39:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:01.447 15:39:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:01.447 spdk_app_start Round 1 00:07:01.447 15:39:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 899210 /var/tmp/spdk-nbd.sock 00:07:01.447 15:39:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 899210 ']' 00:07:01.447 15:39:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:01.447 15:39:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.447 15:39:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:01.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
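Each round's data check, traced in full above for Round 0, comes down to writing a 1 MiB random pattern through both NBD devices, comparing it back, and detaching. Stripped to the commands that appear in the trace (again with a shortened pattern-file path), it is roughly:

  pattern=/tmp/nbdrandtest                        # trace keeps this under spdk/test/event/
  dd if=/dev/urandom of="$pattern" bs=4096 count=256            # 256 x 4 KiB = 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$pattern" of="$nbd" bs=4096 count=256 oflag=direct   # write it through each NBD device
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$pattern" "$nbd"                # byte-compare the first 1 MiB read back
  done
  rm "$pattern"
  rpc="/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1
  count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ]                              # no NBD devices should remain attached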
00:07:01.447 15:39:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.447 15:39:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:01.447 15:39:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.447 15:39:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:01.447 15:39:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:01.447 Malloc0 00:07:01.447 15:39:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:01.707 Malloc1 00:07:01.707 15:39:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.707 15:39:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.707 15:39:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.707 15:39:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:01.707 15:39:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.707 15:39:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:01.707 15:39:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.707 15:39:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.707 15:39:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.707 15:39:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:01.707 15:39:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.707 15:39:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:01.707 15:39:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:01.707 15:39:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:01.707 15:39:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.707 15:39:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:01.707 /dev/nbd0 00:07:01.966 15:39:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:01.966 15:39:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:01.966 15:39:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:01.966 15:39:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:01.966 15:39:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.966 15:39:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.966 15:39:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:01.966 15:39:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:01.966 15:39:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.966 15:39:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.966 15:39:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:01.966 1+0 records in 00:07:01.966 1+0 records out 00:07:01.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00070985 s, 5.8 MB/s 00:07:01.966 15:39:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:01.966 15:39:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:01.966 15:39:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:01.966 15:39:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.966 15:39:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:01.966 15:39:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.966 15:39:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.966 15:39:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:01.966 /dev/nbd1 00:07:01.966 15:39:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:01.966 15:39:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:01.967 15:39:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:01.967 15:39:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:01.967 15:39:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.967 15:39:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.967 15:39:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:02.226 15:39:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:02.226 15:39:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.226 15:39:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.226 15:39:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:02.226 1+0 records in 00:07:02.226 1+0 records out 00:07:02.226 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231743 s, 17.7 MB/s 00:07:02.226 15:39:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:02.226 15:39:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:02.226 15:39:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:02.226 15:39:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.226 15:39:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:02.226 15:39:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.226 15:39:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:02.226 15:39:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:02.226 15:39:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.226 15:39:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:07:02.226 15:39:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:02.226 { 00:07:02.226 "nbd_device": "/dev/nbd0", 00:07:02.226 "bdev_name": "Malloc0" 00:07:02.226 }, 00:07:02.226 { 00:07:02.226 "nbd_device": "/dev/nbd1", 00:07:02.226 "bdev_name": "Malloc1" 00:07:02.226 } 00:07:02.226 ]' 00:07:02.226 15:39:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:02.226 { 00:07:02.226 "nbd_device": "/dev/nbd0", 00:07:02.226 "bdev_name": "Malloc0" 00:07:02.226 }, 00:07:02.226 { 00:07:02.226 "nbd_device": "/dev/nbd1", 00:07:02.226 "bdev_name": "Malloc1" 00:07:02.226 } 00:07:02.226 ]' 00:07:02.226 15:39:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.226 15:39:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:02.226 /dev/nbd1' 00:07:02.226 15:39:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:02.226 /dev/nbd1' 00:07:02.226 15:39:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.226 15:39:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:02.226 15:39:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:02.226 15:39:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:02.226 15:39:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:02.226 15:39:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:02.226 15:39:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:02.486 256+0 records in 00:07:02.486 256+0 records out 00:07:02.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107184 s, 97.8 MB/s 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:02.486 256+0 records in 00:07:02.486 256+0 records out 00:07:02.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207343 s, 50.6 MB/s 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:02.486 256+0 records in 00:07:02.486 256+0 records out 00:07:02.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022184 s, 47.3 MB/s 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.486 15:39:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:02.745 15:39:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:02.745 15:39:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:02.745 15:39:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:02.745 15:39:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.745 15:39:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.745 15:39:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:02.745 15:39:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:02.745 15:39:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.745 15:39:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.745 15:39:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:02.745 15:39:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:02.745 15:39:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:02.745 15:39:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:02.745 15:39:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.745 15:39:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.745 15:39:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:03.004 15:39:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:03.004 15:39:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.004 15:39:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.004 15:39:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.004 15:39:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.005 15:39:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:03.005 15:39:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:03.005 15:39:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:03.005 15:39:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:03.005 15:39:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:03.005 15:39:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.005 15:39:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:03.005 15:39:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:03.005 15:39:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:03.005 15:39:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:03.005 15:39:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:03.005 15:39:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:03.005 15:39:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:03.264 15:39:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:03.524 [2024-12-09 15:39:58.586141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:03.524 [2024-12-09 15:39:58.629833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.524 [2024-12-09 15:39:58.629834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.524 [2024-12-09 15:39:58.669879] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:03.524 [2024-12-09 15:39:58.669924] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:06.815 15:40:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:06.815 15:40:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:06.815 spdk_app_start Round 2 00:07:06.815 15:40:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 899210 /var/tmp/spdk-nbd.sock 00:07:06.815 15:40:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 899210 ']' 00:07:06.815 15:40:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:06.815 15:40:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.815 15:40:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:06.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
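The cycle above (attach, write/verify, detach, spdk_kill_instance SIGTERM, sleep 3) then repeats: the trace shows the same app_repeat pid 899210 coming back for Round 2 and the DPDK init banner reappearing under that pid, so the app started once with -t 4 re-initializes itself for each round. The outer loop in event.sh, reconstructed from the event.sh@18-@35 steps in the trace (waitforlisten and killprocess are the usual autotest_common.sh helpers), is approximately:

  app=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat
  rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  $app -r "$sock" -m 0x3 -t 4 &                   # event.sh@18-19: one long-lived test app
  repeat_pid=$!
  trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT   # event.sh@20
  for i in {0..2}; do                             # event.sh@23
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" "$sock"           # wait for this round's RPC listener
    # ... malloc/nbd setup, dd write + cmp verify, nbd teardown (as traced above) ...
    $rpc -s "$sock" spdk_kill_instance SIGTERM    # event.sh@34: end the round
    sleep 3                                       # event.sh@35: let the app come back up
  done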
00:07:06.815 15:40:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.815 15:40:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:06.815 15:40:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.815 15:40:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:06.815 15:40:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:06.815 Malloc0 00:07:06.815 15:40:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:06.815 Malloc1 00:07:06.815 15:40:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:06.815 15:40:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.815 15:40:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:06.815 15:40:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:06.815 15:40:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.815 15:40:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:06.815 15:40:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:06.815 15:40:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.815 15:40:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:06.815 15:40:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:06.815 15:40:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.815 15:40:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:06.815 15:40:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:06.815 15:40:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:06.815 15:40:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:06.815 15:40:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:07.074 /dev/nbd0 00:07:07.074 15:40:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:07.074 15:40:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:07.074 15:40:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:07.074 15:40:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:07.074 15:40:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.074 15:40:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.074 15:40:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:07.074 15:40:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:07.074 15:40:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.074 15:40:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.074 15:40:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:07.074 1+0 records in 00:07:07.074 1+0 records out 00:07:07.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223245 s, 18.3 MB/s 00:07:07.074 15:40:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:07.075 15:40:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:07.075 15:40:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:07.075 15:40:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.075 15:40:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:07.075 15:40:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.075 15:40:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.075 15:40:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:07.333 /dev/nbd1 00:07:07.333 15:40:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:07.333 15:40:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:07.333 15:40:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:07.333 15:40:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:07.333 15:40:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.333 15:40:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.333 15:40:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:07.333 15:40:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:07.333 15:40:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.333 15:40:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.333 15:40:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:07.333 1+0 records in 00:07:07.333 1+0 records out 00:07:07.333 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026744 s, 15.3 MB/s 00:07:07.333 15:40:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:07.333 15:40:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:07.333 15:40:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:07.333 15:40:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.333 15:40:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:07.333 15:40:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.333 15:40:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.333 15:40:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.333 15:40:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.333 15:40:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:07:07.592 15:40:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:07.592 { 00:07:07.592 "nbd_device": "/dev/nbd0", 00:07:07.592 "bdev_name": "Malloc0" 00:07:07.592 }, 00:07:07.592 { 00:07:07.592 "nbd_device": "/dev/nbd1", 00:07:07.592 "bdev_name": "Malloc1" 00:07:07.592 } 00:07:07.592 ]' 00:07:07.592 15:40:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:07.592 { 00:07:07.592 "nbd_device": "/dev/nbd0", 00:07:07.592 "bdev_name": "Malloc0" 00:07:07.592 }, 00:07:07.592 { 00:07:07.592 "nbd_device": "/dev/nbd1", 00:07:07.592 "bdev_name": "Malloc1" 00:07:07.592 } 00:07:07.592 ]' 00:07:07.592 15:40:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:07.593 /dev/nbd1' 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:07.593 /dev/nbd1' 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:07.593 256+0 records in 00:07:07.593 256+0 records out 00:07:07.593 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109773 s, 95.5 MB/s 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:07.593 256+0 records in 00:07:07.593 256+0 records out 00:07:07.593 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206382 s, 50.8 MB/s 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:07.593 256+0 records in 00:07:07.593 256+0 records out 00:07:07.593 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022145 s, 47.4 MB/s 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:07.593 15:40:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:07.852 15:40:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:07.852 15:40:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.852 15:40:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.852 15:40:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:07.852 15:40:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:07.852 15:40:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.852 15:40:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:07.852 15:40:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:07.852 15:40:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:07.852 15:40:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:07.852 15:40:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.852 15:40:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.852 15:40:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:07.852 15:40:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:07.852 15:40:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.852 15:40:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.852 15:40:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:08.111 15:40:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:08.111 15:40:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:08.111 15:40:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:08.111 15:40:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.111 15:40:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.111 15:40:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:08.111 15:40:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:08.111 15:40:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.111 15:40:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.111 15:40:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.111 15:40:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.369 15:40:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:08.369 15:40:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:08.369 15:40:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.369 15:40:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:08.369 15:40:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.369 15:40:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:08.369 15:40:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:08.369 15:40:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:08.369 15:40:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:08.369 15:40:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:08.369 15:40:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:08.369 15:40:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:08.369 15:40:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:08.628 15:40:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:08.887 [2024-12-09 15:40:03.856379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:08.887 [2024-12-09 15:40:03.899901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.887 [2024-12-09 15:40:03.899902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.887 [2024-12-09 15:40:03.939822] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:08.887 [2024-12-09 15:40:03.939877] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:12.173 15:40:06 event.app_repeat -- event/event.sh@38 -- # waitforlisten 899210 /var/tmp/spdk-nbd.sock 00:07:12.173 15:40:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 899210 ']' 00:07:12.173 15:40:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:12.173 15:40:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.173 15:40:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:12.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:12.173 15:40:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.173 15:40:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:12.173 15:40:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.173 15:40:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:12.173 15:40:06 event.app_repeat -- event/event.sh@39 -- # killprocess 899210 00:07:12.173 15:40:06 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 899210 ']' 00:07:12.173 15:40:06 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 899210 00:07:12.173 15:40:06 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:12.173 15:40:06 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.173 15:40:06 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 899210 00:07:12.173 15:40:06 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.173 15:40:06 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.173 15:40:06 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 899210' 00:07:12.173 killing process with pid 899210 00:07:12.173 15:40:06 event.app_repeat -- common/autotest_common.sh@973 -- # kill 899210 00:07:12.173 15:40:06 event.app_repeat -- common/autotest_common.sh@978 -- # wait 899210 00:07:12.173 spdk_app_start is called in Round 0. 00:07:12.173 Shutdown signal received, stop current app iteration 00:07:12.173 Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 reinitialization... 00:07:12.173 spdk_app_start is called in Round 1. 00:07:12.173 Shutdown signal received, stop current app iteration 00:07:12.173 Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 reinitialization... 00:07:12.173 spdk_app_start is called in Round 2. 00:07:12.173 Shutdown signal received, stop current app iteration 00:07:12.173 Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 reinitialization... 00:07:12.173 spdk_app_start is called in Round 3. 
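For reference, the app_repeat sequence traced above reduces to the steps below; this is a minimal sketch built only from commands visible in the trace (the rpc.py path is shortened and /tmp/nbdrandtest stands in for the workspace temp file, both placeholders):

    # an SPDK app is assumed to be listening on /var/tmp/spdk-nbd.sock
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096            # creates bdev Malloc0 (arguments as logged)
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0      # expose the bdev as /dev/nbd0
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256               # generate reference data
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct     # write it through the NBD device
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                                # verify the readback matches
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0               # detach the device again
    rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM            # request shutdown; the next round then starts in the trace

Each "spdk_app_start is called in Round N" line corresponds to one such SIGTERM-driven restart.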
00:07:12.173 Shutdown signal received, stop current app iteration 00:07:12.173 15:40:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:12.173 15:40:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:12.173 00:07:12.173 real 0m16.318s 00:07:12.173 user 0m35.180s 00:07:12.173 sys 0m3.130s 00:07:12.173 15:40:07 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.173 15:40:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:12.173 ************************************ 00:07:12.173 END TEST app_repeat 00:07:12.173 ************************************ 00:07:12.173 15:40:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:12.173 15:40:07 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:12.173 15:40:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.173 15:40:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.173 15:40:07 event -- common/autotest_common.sh@10 -- # set +x 00:07:12.173 ************************************ 00:07:12.173 START TEST cpu_locks 00:07:12.173 ************************************ 00:07:12.173 15:40:07 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:12.173 * Looking for test storage... 00:07:12.173 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:07:12.173 15:40:07 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:12.173 15:40:07 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:07:12.173 15:40:07 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:12.173 15:40:07 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.173 15:40:07 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:12.173 15:40:07 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.173 15:40:07 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:12.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.173 --rc genhtml_branch_coverage=1 00:07:12.173 --rc genhtml_function_coverage=1 00:07:12.173 --rc genhtml_legend=1 00:07:12.173 --rc geninfo_all_blocks=1 00:07:12.173 --rc geninfo_unexecuted_blocks=1 00:07:12.173 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:12.173 ' 00:07:12.173 15:40:07 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:12.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.173 --rc genhtml_branch_coverage=1 00:07:12.173 --rc genhtml_function_coverage=1 00:07:12.173 --rc genhtml_legend=1 00:07:12.173 --rc geninfo_all_blocks=1 00:07:12.173 --rc geninfo_unexecuted_blocks=1 00:07:12.173 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:12.173 ' 00:07:12.173 15:40:07 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:12.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.173 --rc genhtml_branch_coverage=1 00:07:12.173 --rc genhtml_function_coverage=1 00:07:12.173 --rc genhtml_legend=1 00:07:12.173 --rc geninfo_all_blocks=1 00:07:12.173 --rc geninfo_unexecuted_blocks=1 00:07:12.173 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:12.173 ' 00:07:12.173 15:40:07 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:12.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.174 --rc genhtml_branch_coverage=1 00:07:12.174 --rc genhtml_function_coverage=1 00:07:12.174 --rc genhtml_legend=1 00:07:12.174 --rc geninfo_all_blocks=1 00:07:12.174 --rc geninfo_unexecuted_blocks=1 00:07:12.174 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:12.174 ' 00:07:12.174 15:40:07 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:12.174 15:40:07 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:12.174 15:40:07 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:12.174 15:40:07 event.cpu_locks -- 
event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:12.174 15:40:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.174 15:40:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.174 15:40:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:12.174 ************************************ 00:07:12.174 START TEST default_locks 00:07:12.174 ************************************ 00:07:12.174 15:40:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:12.174 15:40:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=901556 00:07:12.174 15:40:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 901556 00:07:12.174 15:40:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:12.174 15:40:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 901556 ']' 00:07:12.174 15:40:07 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.174 15:40:07 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.174 15:40:07 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.174 15:40:07 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.174 15:40:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:12.433 [2024-12-09 15:40:07.407087] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:07:12.433 [2024-12-09 15:40:07.407145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid901556 ] 00:07:12.433 [2024-12-09 15:40:07.478140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.433 [2024-12-09 15:40:07.521996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.692 15:40:07 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.692 15:40:07 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:12.692 15:40:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 901556 00:07:12.692 15:40:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 901556 00:07:12.692 15:40:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:13.259 lslocks: write error 00:07:13.259 15:40:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 901556 00:07:13.259 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 901556 ']' 00:07:13.259 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 901556 00:07:13.260 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:13.260 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.260 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 901556 00:07:13.260 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.260 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.260 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 901556' 00:07:13.260 killing process with pid 901556 00:07:13.260 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 901556 00:07:13.260 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 901556 00:07:13.519 15:40:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 901556 00:07:13.519 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:13.519 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 901556 00:07:13.519 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:13.519 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.519 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:13.519 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.519 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 901556 00:07:13.519 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 901556 ']' 00:07:13.519 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.519 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 
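The default_locks steps above can be read as the following shell sequence; a rough sketch assuming an spdk_tgt binary on PATH (pid handling is illustrative, the core mask and lock-file name are the ones in the trace):

    spdk_tgt -m 0x1 &                              # claim core 0; an spdk_cpu_lock file is created for it
    pid=$!
    lslocks -p "$pid" | grep -q spdk_cpu_lock      # the lock is visible while the target runs
    kill "$pid"; wait "$pid"                       # stop the target
    # a follow-up waitforlisten on the dead pid is expected to fail with
    # "No such process", which is what the NOT wrapper in the trace checks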
00:07:13.519 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.519 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.519 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.519 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (901556) - No such process 00:07:13.519 ERROR: process (pid: 901556) is no longer running 00:07:13.519 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.519 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:13.519 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:13.519 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:13.519 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:13.519 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:13.519 15:40:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:13.519 15:40:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:13.519 15:40:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:13.519 15:40:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:13.519 00:07:13.519 real 0m1.277s 00:07:13.519 user 0m1.246s 00:07:13.519 sys 0m0.630s 00:07:13.519 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.519 15:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.519 ************************************ 00:07:13.519 END TEST default_locks 00:07:13.519 ************************************ 00:07:13.519 15:40:08 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:13.519 15:40:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.519 15:40:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.519 15:40:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.519 ************************************ 00:07:13.519 START TEST default_locks_via_rpc 00:07:13.519 ************************************ 00:07:13.519 15:40:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:13.520 15:40:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=901758 00:07:13.520 15:40:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 901758 00:07:13.520 15:40:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:13.520 15:40:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 901758 ']' 00:07:13.520 15:40:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.520 15:40:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.520 15:40:08 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.520 15:40:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.520 15:40:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.779 [2024-12-09 15:40:08.752551] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:07:13.779 [2024-12-09 15:40:08.752613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid901758 ] 00:07:13.779 [2024-12-09 15:40:08.822446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.779 [2024-12-09 15:40:08.870569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.038 15:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.038 15:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:14.038 15:40:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:14.038 15:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.038 15:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.038 15:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.038 15:40:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:14.038 15:40:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:14.038 15:40:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:14.038 15:40:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:14.038 15:40:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:14.038 15:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.038 15:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.038 15:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.038 15:40:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 901758 00:07:14.038 15:40:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 901758 00:07:14.038 15:40:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:14.606 15:40:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 901758 00:07:14.606 15:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 901758 ']' 00:07:14.606 15:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 901758 00:07:14.606 15:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:14.606 15:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
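default_locks_via_rpc repeats the same check but toggles the core locks at runtime over the RPC socket instead of at start-up. A hedged sketch of the calls seen in the trace (the log uses the rpc_cmd helper; invoking rpc.py directly with the same method names, as below, is assumed to be equivalent, and the socket path is the rpc_sock1 value from the trace):

    rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # release the per-core lock files
    # at this point the trace checks that no spdk_cpu_lock file is held
    rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # take the locks again
    lslocks -p "$pid" | grep -q spdk_cpu_lock                      # the lock shows up again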
00:07:14.606 15:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 901758 00:07:14.606 15:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.606 15:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.606 15:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 901758' 00:07:14.606 killing process with pid 901758 00:07:14.606 15:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 901758 00:07:14.606 15:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 901758 00:07:14.865 00:07:14.865 real 0m1.192s 00:07:14.865 user 0m1.163s 00:07:14.865 sys 0m0.539s 00:07:14.865 15:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.865 15:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.865 ************************************ 00:07:14.865 END TEST default_locks_via_rpc 00:07:14.865 ************************************ 00:07:14.865 15:40:09 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:14.865 15:40:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.865 15:40:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.865 15:40:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.865 ************************************ 00:07:14.865 START TEST non_locking_app_on_locked_coremask 00:07:14.865 ************************************ 00:07:14.865 15:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:14.865 15:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=901961 00:07:14.865 15:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 901961 /var/tmp/spdk.sock 00:07:14.865 15:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:14.865 15:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 901961 ']' 00:07:14.865 15:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.865 15:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.865 15:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.865 15:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.865 15:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.865 [2024-12-09 15:40:10.023449] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:07:14.865 [2024-12-09 15:40:10.023511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid901961 ] 00:07:15.126 [2024-12-09 15:40:10.094021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.126 [2024-12-09 15:40:10.142398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.385 15:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.385 15:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:15.385 15:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:15.385 15:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=902080 00:07:15.385 15:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 902080 /var/tmp/spdk2.sock 00:07:15.385 15:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 902080 ']' 00:07:15.385 15:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.385 15:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.385 15:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:15.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:15.385 15:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.385 15:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.385 [2024-12-09 15:40:10.366765] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:07:15.385 [2024-12-09 15:40:10.366819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902080 ] 00:07:15.385 [2024-12-09 15:40:10.461880] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:15.385 [2024-12-09 15:40:10.461915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.385 [2024-12-09 15:40:10.563357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.321 15:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.321 15:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:16.321 15:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 901961 00:07:16.321 15:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 901961 00:07:16.321 15:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:16.888 lslocks: write error 00:07:16.888 15:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 901961 00:07:16.888 15:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 901961 ']' 00:07:16.888 15:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 901961 00:07:16.888 15:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:16.888 15:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.888 15:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 901961 00:07:16.888 15:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.888 15:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.888 15:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 901961' 00:07:16.888 killing process with pid 901961 00:07:16.888 15:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 901961 00:07:16.888 15:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 901961 00:07:17.456 15:40:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 902080 00:07:17.456 15:40:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 902080 ']' 00:07:17.456 15:40:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 902080 00:07:17.456 15:40:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:17.456 15:40:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.456 15:40:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 902080 00:07:17.456 15:40:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.456 15:40:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.456 15:40:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 902080' 00:07:17.456 killing 
process with pid 902080 00:07:17.456 15:40:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 902080 00:07:17.456 15:40:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 902080 00:07:17.715 00:07:17.715 real 0m2.878s 00:07:17.715 user 0m3.007s 00:07:17.715 sys 0m1.063s 00:07:17.715 15:40:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.715 15:40:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.715 ************************************ 00:07:17.715 END TEST non_locking_app_on_locked_coremask 00:07:17.715 ************************************ 00:07:17.715 15:40:12 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:17.715 15:40:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.715 15:40:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.715 15:40:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.975 ************************************ 00:07:17.975 START TEST locking_app_on_unlocked_coremask 00:07:17.975 ************************************ 00:07:17.975 15:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:17.975 15:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:17.975 15:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=902437 00:07:17.975 15:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 902437 /var/tmp/spdk.sock 00:07:17.975 15:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 902437 ']' 00:07:17.975 15:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.975 15:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.975 15:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.975 15:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.975 15:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.975 [2024-12-09 15:40:12.981176] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:07:17.975 [2024-12-09 15:40:12.981243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902437 ] 00:07:17.975 [2024-12-09 15:40:13.055209] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:17.975 [2024-12-09 15:40:13.055247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.975 [2024-12-09 15:40:13.103244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.235 15:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.235 15:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:18.235 15:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=902526 00:07:18.235 15:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 902526 /var/tmp/spdk2.sock 00:07:18.235 15:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:18.235 15:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 902526 ']' 00:07:18.235 15:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.235 15:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.235 15:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:18.235 15:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.235 15:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.235 [2024-12-09 15:40:13.340630] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:07:18.235 [2024-12-09 15:40:13.340695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902526 ] 00:07:18.235 [2024-12-09 15:40:13.434310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.494 [2024-12-09 15:40:13.522927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.062 15:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.062 15:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:19.062 15:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 902526 00:07:19.062 15:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 902526 00:07:19.062 15:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:20.438 lslocks: write error 00:07:20.438 15:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 902437 00:07:20.438 15:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 902437 ']' 00:07:20.438 15:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 902437 00:07:20.438 15:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:20.438 15:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.438 15:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 902437 00:07:20.438 15:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.438 15:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.439 15:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 902437' 00:07:20.439 killing process with pid 902437 00:07:20.439 15:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 902437 00:07:20.439 15:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 902437 00:07:21.006 15:40:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 902526 00:07:21.006 15:40:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 902526 ']' 00:07:21.006 15:40:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 902526 00:07:21.006 15:40:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:21.006 15:40:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.006 15:40:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 902526 00:07:21.006 15:40:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.006 15:40:16 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.006 15:40:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 902526' 00:07:21.006 killing process with pid 902526 00:07:21.006 15:40:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 902526 00:07:21.006 15:40:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 902526 00:07:21.266 00:07:21.266 real 0m3.421s 00:07:21.266 user 0m3.604s 00:07:21.266 sys 0m1.295s 00:07:21.266 15:40:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.266 15:40:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.266 ************************************ 00:07:21.266 END TEST locking_app_on_unlocked_coremask 00:07:21.266 ************************************ 00:07:21.266 15:40:16 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:21.266 15:40:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.266 15:40:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.266 15:40:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.266 ************************************ 00:07:21.266 START TEST locking_app_on_locked_coremask 00:07:21.266 ************************************ 00:07:21.266 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:21.266 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=902920 00:07:21.266 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 902920 /var/tmp/spdk.sock 00:07:21.266 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:21.266 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 902920 ']' 00:07:21.266 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.266 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.266 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.266 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.266 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.266 [2024-12-09 15:40:16.486630] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:07:21.266 [2024-12-09 15:40:16.486689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902920 ] 00:07:21.525 [2024-12-09 15:40:16.558684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.525 [2024-12-09 15:40:16.601653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.784 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.784 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:21.784 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=902942 00:07:21.784 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 902942 /var/tmp/spdk2.sock 00:07:21.784 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:21.784 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:21.784 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 902942 /var/tmp/spdk2.sock 00:07:21.784 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:21.784 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.784 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:21.784 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.784 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 902942 /var/tmp/spdk2.sock 00:07:21.784 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 902942 ']' 00:07:21.784 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.784 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.784 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:21.784 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.784 15:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.784 [2024-12-09 15:40:16.838763] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:07:21.784 [2024-12-09 15:40:16.838869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902942 ] 00:07:21.784 [2024-12-09 15:40:16.939131] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 902920 has claimed it. 00:07:21.784 [2024-12-09 15:40:16.939177] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:22.352 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (902942) - No such process 00:07:22.352 ERROR: process (pid: 902942) is no longer running 00:07:22.352 15:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.352 15:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:22.352 15:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:22.352 15:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:22.352 15:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:22.352 15:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:22.352 15:40:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 902920 00:07:22.352 15:40:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 902920 00:07:22.352 15:40:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:22.918 lslocks: write error 00:07:22.918 15:40:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 902920 00:07:22.918 15:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 902920 ']' 00:07:22.918 15:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 902920 00:07:23.177 15:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:23.177 15:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.177 15:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 902920 00:07:23.177 15:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.177 15:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.177 15:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 902920' 00:07:23.177 killing process with pid 902920 00:07:23.177 15:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 902920 00:07:23.177 15:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 902920 00:07:23.436 00:07:23.436 real 0m2.040s 00:07:23.436 user 0m2.190s 00:07:23.436 sys 0m0.765s 00:07:23.436 15:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.436 
15:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.436 ************************************ 00:07:23.436 END TEST locking_app_on_locked_coremask 00:07:23.436 ************************************ 00:07:23.436 15:40:18 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:23.436 15:40:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.436 15:40:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.436 15:40:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:23.436 ************************************ 00:07:23.436 START TEST locking_overlapped_coremask 00:07:23.436 ************************************ 00:07:23.437 15:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:23.437 15:40:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:23.437 15:40:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=903295 00:07:23.437 15:40:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 903295 /var/tmp/spdk.sock 00:07:23.437 15:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 903295 ']' 00:07:23.437 15:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.437 15:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.437 15:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.437 15:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.437 15:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.437 [2024-12-09 15:40:18.602346] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:07:23.437 [2024-12-09 15:40:18.602420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903295 ] 00:07:23.696 [2024-12-09 15:40:18.677578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:23.696 [2024-12-09 15:40:18.727232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.696 [2024-12-09 15:40:18.727252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.696 [2024-12-09 15:40:18.727254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.954 15:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.954 15:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:23.954 15:40:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=903301 00:07:23.954 15:40:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 903301 /var/tmp/spdk2.sock 00:07:23.954 15:40:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:23.954 15:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:23.954 15:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 903301 /var/tmp/spdk2.sock 00:07:23.954 15:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:23.954 15:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.954 15:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:23.954 15:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.955 15:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 903301 /var/tmp/spdk2.sock 00:07:23.955 15:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 903301 ']' 00:07:23.955 15:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:23.955 15:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.955 15:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:23.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:23.955 15:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.955 15:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.955 [2024-12-09 15:40:18.967588] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:07:23.955 [2024-12-09 15:40:18.967673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903301 ] 00:07:23.955 [2024-12-09 15:40:19.064425] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 903295 has claimed it. 00:07:23.955 [2024-12-09 15:40:19.064468] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:24.522 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (903301) - No such process 00:07:24.522 ERROR: process (pid: 903301) is no longer running 00:07:24.522 15:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.522 15:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:24.522 15:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:24.522 15:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:24.522 15:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:24.522 15:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:24.522 15:40:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:24.522 15:40:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:24.522 15:40:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:24.522 15:40:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:24.522 15:40:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 903295 00:07:24.522 15:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 903295 ']' 00:07:24.522 15:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 903295 00:07:24.522 15:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:24.522 15:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.522 15:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 903295 00:07:24.522 15:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.522 15:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.522 15:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 903295' 00:07:24.522 killing process with pid 903295 00:07:24.522 15:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 903295 00:07:24.522 15:40:19 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 903295 00:07:24.781 00:07:24.781 real 0m1.406s 00:07:24.781 user 0m3.900s 00:07:24.781 sys 0m0.412s 00:07:24.781 15:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.781 15:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.781 ************************************ 00:07:24.781 END TEST locking_overlapped_coremask 00:07:24.782 ************************************ 00:07:25.041 15:40:20 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:25.041 15:40:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.041 15:40:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.041 15:40:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.041 ************************************ 00:07:25.041 START TEST locking_overlapped_coremask_via_rpc 00:07:25.041 ************************************ 00:07:25.041 15:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:25.041 15:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=903505 00:07:25.041 15:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 903505 /var/tmp/spdk.sock 00:07:25.041 15:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 903505 ']' 00:07:25.041 15:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.041 15:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.041 15:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.041 15:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.041 15:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.041 15:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:25.041 [2024-12-09 15:40:20.089237] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:07:25.041 [2024-12-09 15:40:20.089298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903505 ] 00:07:25.041 [2024-12-09 15:40:20.159941] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
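The "Cannot create lock on core N, probably process X has claimed it" failures in the locking tests above come from per-core lock files: the scripts later look for them with lslocks piped through grep spdk_cpu_lock, and check_remaining_locks compares the /var/tmp/spdk_cpu_lock_* glob against the expected core list. The snippet below is a minimal Python sketch of that idea, assuming one advisory lock file per core under /var/tmp with the names seen in the log; it illustrates the pattern only and is not SPDK's actual C implementation in app.c.

#!/usr/bin/env python3
# core_lock_sketch.py - illustrative only; mimics the per-core lock-file idea
# exercised by the cpu_locks tests above (the real logic lives in SPDK's app.c).
import fcntl
import os
import sys

LOCK_DIR = "/var/tmp"        # assumption: same directory the tests inspect with lslocks
PREFIX = "spdk_cpu_lock_"    # assumption: matches the spdk_cpu_lock_* names in the log

def claim_core(core: int):
    """Take an exclusive advisory lock for one CPU core, or fail loudly.

    Returns the open file object so the lock stays held for the life of the
    process; raises RuntimeError if another process already owns the core.
    """
    path = os.path.join(LOCK_DIR, f"{PREFIX}{core:03d}")
    lock_file = open(path, "w")
    try:
        fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        lock_file.close()
        raise RuntimeError(f"Cannot create lock on core {core}: already claimed")
    lock_file.write(str(os.getpid()))
    lock_file.flush()
    return lock_file

if __name__ == "__main__":
    cores = [int(c) for c in sys.argv[1:]] or [0]
    held = [claim_core(core) for core in cores]
    print(f"claimed cores {cores}; pid {os.getpid()} holds the locks")
    input("holding locks until Enter is pressed...")

A second copy of this script started on an already-claimed core fails immediately, which is the behaviour the NOT waitforlisten checks above rely on when the second spdk_tgt exits.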
00:07:25.041 [2024-12-09 15:40:20.159974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:25.041 [2024-12-09 15:40:20.209898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.041 [2024-12-09 15:40:20.209915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.041 [2024-12-09 15:40:20.209918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.300 15:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.300 15:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:25.300 15:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=903510 00:07:25.300 15:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 903510 /var/tmp/spdk2.sock 00:07:25.300 15:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:25.300 15:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 903510 ']' 00:07:25.300 15:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:25.300 15:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.301 15:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:25.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:25.301 15:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.301 15:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.301 [2024-12-09 15:40:20.451488] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:07:25.301 [2024-12-09 15:40:20.451568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903510 ] 00:07:25.560 [2024-12-09 15:40:20.548733] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
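Throughout these overlapped-coremask tests the first target runs with -m 0x7 and the second with -m 0x1c; the only core present in both masks is core 2, which is why the direct-start failure earlier and the RPC failure further below both name core 2. A small sketch for decoding such a mask into core ids (assuming the mask is a plain hex bitmap, as passed to -m here):

#!/usr/bin/env python3
# cpumask_sketch.py - decode the -m masks used by the two targets above
def cores_from_mask(mask: int):
    """Return the list of core ids whose bit is set in a CPU mask."""
    cores, bit = [], 0
    while mask:
        if mask & 1:
            cores.append(bit)
        mask >>= 1
        bit += 1
    return cores

if __name__ == "__main__":
    a = cores_from_mask(0x7)     # [0, 1, 2]  -> first spdk_tgt
    b = cores_from_mask(0x1c)    # [2, 3, 4]  -> second spdk_tgt
    print("first target cores: ", a)
    print("second target cores:", b)
    print("contested cores:    ", sorted(set(a) & set(b)))   # [2]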
00:07:25.560 [2024-12-09 15:40:20.548764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:25.560 [2024-12-09 15:40:20.644368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.560 [2024-12-09 15:40:20.647888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.560 [2024-12-09 15:40:20.647890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:26.129 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.129 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:26.129 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:26.129 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.129 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.129 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.129 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:26.129 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:26.129 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:26.129 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:26.129 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.129 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:26.129 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.129 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:26.129 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.129 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.129 [2024-12-09 15:40:21.344906] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 903505 has claimed it. 
00:07:26.129 request: 00:07:26.129 { 00:07:26.129 "method": "framework_enable_cpumask_locks", 00:07:26.129 "req_id": 1 00:07:26.129 } 00:07:26.129 Got JSON-RPC error response 00:07:26.129 response: 00:07:26.129 { 00:07:26.129 "code": -32603, 00:07:26.388 "message": "Failed to claim CPU core: 2" 00:07:26.388 } 00:07:26.388 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:26.388 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:26.388 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:26.388 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:26.388 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:26.388 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 903505 /var/tmp/spdk.sock 00:07:26.388 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 903505 ']' 00:07:26.388 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.388 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.388 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.388 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.388 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.389 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.389 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:26.389 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 903510 /var/tmp/spdk2.sock 00:07:26.389 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 903510 ']' 00:07:26.389 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:26.389 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.389 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:26.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
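The error object shown just above is an ordinary JSON-RPC 2.0 response from the second target's /var/tmp/spdk2.sock socket: framework_enable_cpumask_locks fails with code -32603 because the first target already holds the lock for the shared core. Below is a minimal raw-socket client sketch; the test itself goes through scripts/rpc.py, and the request framing and socket path used here are assumptions.

#!/usr/bin/env python3
# rpc_sketch.py - minimal JSON-RPC-over-Unix-socket client, sketched from the
# request/response pair printed above; scripts/rpc.py is the real SPDK client.
import json
import socket
import sys

def rpc_call(sock_path: str, method: str, params=None, req_id: int = 1):
    """Send one JSON-RPC 2.0 request and return the decoded response dict."""
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise RuntimeError("connection closed before a full response arrived")
            buf += chunk
            try:
                return json.loads(buf.decode())   # stop once a full JSON object arrived
            except ValueError:
                continue

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "/var/tmp/spdk2.sock"
    resp = rpc_call(target, "framework_enable_cpumask_locks")
    # Against the second target above this is expected to return
    # {"code": -32603, "message": "Failed to claim CPU core: 2"} because the
    # first target has already enabled its locks on the overlapping core.
    print(json.dumps(resp, indent=2))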
00:07:26.389 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.389 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.648 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.648 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:26.648 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:26.648 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:26.648 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:26.648 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:26.648 00:07:26.648 real 0m1.693s 00:07:26.648 user 0m0.794s 00:07:26.648 sys 0m0.171s 00:07:26.648 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.648 15:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.648 ************************************ 00:07:26.648 END TEST locking_overlapped_coremask_via_rpc 00:07:26.648 ************************************ 00:07:26.648 15:40:21 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:26.648 15:40:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 903505 ]] 00:07:26.648 15:40:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 903505 00:07:26.648 15:40:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 903505 ']' 00:07:26.648 15:40:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 903505 00:07:26.648 15:40:21 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:26.648 15:40:21 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.648 15:40:21 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 903505 00:07:26.648 15:40:21 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.648 15:40:21 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.648 15:40:21 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 903505' 00:07:26.648 killing process with pid 903505 00:07:26.648 15:40:21 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 903505 00:07:26.648 15:40:21 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 903505 00:07:27.217 15:40:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 903510 ]] 00:07:27.217 15:40:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 903510 00:07:27.217 15:40:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 903510 ']' 00:07:27.217 15:40:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 903510 00:07:27.217 15:40:22 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:27.217 15:40:22 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:07:27.217 15:40:22 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 903510 00:07:27.217 15:40:22 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:27.217 15:40:22 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:27.217 15:40:22 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 903510' 00:07:27.217 killing process with pid 903510 00:07:27.217 15:40:22 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 903510 00:07:27.217 15:40:22 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 903510 00:07:27.477 15:40:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:27.477 15:40:22 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:27.477 15:40:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 903505 ]] 00:07:27.477 15:40:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 903505 00:07:27.477 15:40:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 903505 ']' 00:07:27.477 15:40:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 903505 00:07:27.477 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (903505) - No such process 00:07:27.477 15:40:22 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 903505 is not found' 00:07:27.477 Process with pid 903505 is not found 00:07:27.477 15:40:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 903510 ]] 00:07:27.477 15:40:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 903510 00:07:27.477 15:40:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 903510 ']' 00:07:27.477 15:40:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 903510 00:07:27.477 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (903510) - No such process 00:07:27.477 15:40:22 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 903510 is not found' 00:07:27.478 Process with pid 903510 is not found 00:07:27.478 15:40:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:27.478 00:07:27.478 real 0m15.383s 00:07:27.478 user 0m25.794s 00:07:27.478 sys 0m5.927s 00:07:27.478 15:40:22 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.478 15:40:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.478 ************************************ 00:07:27.478 END TEST cpu_locks 00:07:27.478 ************************************ 00:07:27.478 00:07:27.478 real 0m40.398s 00:07:27.478 user 1m15.419s 00:07:27.478 sys 0m10.153s 00:07:27.478 15:40:22 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.478 15:40:22 event -- common/autotest_common.sh@10 -- # set +x 00:07:27.478 ************************************ 00:07:27.478 END TEST event 00:07:27.478 ************************************ 00:07:27.478 15:40:22 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:07:27.478 15:40:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.478 15:40:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.478 15:40:22 -- common/autotest_common.sh@10 -- # set +x 00:07:27.478 ************************************ 00:07:27.478 START TEST thread 00:07:27.478 ************************************ 00:07:27.478 15:40:22 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:07:27.737 * Looking for test storage... 00:07:27.737 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:07:27.737 15:40:22 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:27.737 15:40:22 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:27.737 15:40:22 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:07:27.737 15:40:22 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:27.737 15:40:22 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.737 15:40:22 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.737 15:40:22 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.737 15:40:22 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.737 15:40:22 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.737 15:40:22 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.737 15:40:22 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.737 15:40:22 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.737 15:40:22 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.737 15:40:22 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.737 15:40:22 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.737 15:40:22 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:27.737 15:40:22 thread -- scripts/common.sh@345 -- # : 1 00:07:27.737 15:40:22 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.737 15:40:22 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:27.737 15:40:22 thread -- scripts/common.sh@365 -- # decimal 1 00:07:27.737 15:40:22 thread -- scripts/common.sh@353 -- # local d=1 00:07:27.737 15:40:22 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.737 15:40:22 thread -- scripts/common.sh@355 -- # echo 1 00:07:27.737 15:40:22 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.737 15:40:22 thread -- scripts/common.sh@366 -- # decimal 2 00:07:27.737 15:40:22 thread -- scripts/common.sh@353 -- # local d=2 00:07:27.737 15:40:22 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.737 15:40:22 thread -- scripts/common.sh@355 -- # echo 2 00:07:27.737 15:40:22 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.737 15:40:22 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.737 15:40:22 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.738 15:40:22 thread -- scripts/common.sh@368 -- # return 0 00:07:27.738 15:40:22 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.738 15:40:22 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:27.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.738 --rc genhtml_branch_coverage=1 00:07:27.738 --rc genhtml_function_coverage=1 00:07:27.738 --rc genhtml_legend=1 00:07:27.738 --rc geninfo_all_blocks=1 00:07:27.738 --rc geninfo_unexecuted_blocks=1 00:07:27.738 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:27.738 ' 00:07:27.738 15:40:22 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:27.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.738 --rc genhtml_branch_coverage=1 00:07:27.738 --rc genhtml_function_coverage=1 00:07:27.738 --rc genhtml_legend=1 00:07:27.738 --rc geninfo_all_blocks=1 
00:07:27.738 --rc geninfo_unexecuted_blocks=1 00:07:27.738 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:27.738 ' 00:07:27.738 15:40:22 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:27.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.738 --rc genhtml_branch_coverage=1 00:07:27.738 --rc genhtml_function_coverage=1 00:07:27.738 --rc genhtml_legend=1 00:07:27.738 --rc geninfo_all_blocks=1 00:07:27.738 --rc geninfo_unexecuted_blocks=1 00:07:27.738 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:27.738 ' 00:07:27.738 15:40:22 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:27.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.738 --rc genhtml_branch_coverage=1 00:07:27.738 --rc genhtml_function_coverage=1 00:07:27.738 --rc genhtml_legend=1 00:07:27.738 --rc geninfo_all_blocks=1 00:07:27.738 --rc geninfo_unexecuted_blocks=1 00:07:27.738 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:27.738 ' 00:07:27.738 15:40:22 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:27.738 15:40:22 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:27.738 15:40:22 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.738 15:40:22 thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.738 ************************************ 00:07:27.738 START TEST thread_poller_perf 00:07:27.738 ************************************ 00:07:27.738 15:40:22 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:27.738 [2024-12-09 15:40:22.908485] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:07:27.738 [2024-12-09 15:40:22.908567] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903970 ] 00:07:27.997 [2024-12-09 15:40:22.983850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.997 [2024-12-09 15:40:23.027885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.997 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:28.935 [2024-12-09T14:40:24.163Z] ====================================== 00:07:28.935 [2024-12-09T14:40:24.163Z] busy:2304118298 (cyc) 00:07:28.935 [2024-12-09T14:40:24.163Z] total_run_count: 827000 00:07:28.935 [2024-12-09T14:40:24.163Z] tsc_hz: 2300000000 (cyc) 00:07:28.935 [2024-12-09T14:40:24.163Z] ====================================== 00:07:28.935 [2024-12-09T14:40:24.163Z] poller_cost: 2786 (cyc), 1211 (nsec) 00:07:28.935 00:07:28.935 real 0m1.180s 00:07:28.935 user 0m1.097s 00:07:28.935 sys 0m0.079s 00:07:28.935 15:40:24 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.935 15:40:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:28.935 ************************************ 00:07:28.935 END TEST thread_poller_perf 00:07:28.935 ************************************ 00:07:28.935 15:40:24 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:28.935 15:40:24 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:28.935 15:40:24 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.935 15:40:24 thread -- common/autotest_common.sh@10 -- # set +x 00:07:28.935 ************************************ 00:07:28.935 START TEST thread_poller_perf 00:07:28.935 ************************************ 00:07:28.935 15:40:24 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:29.195 [2024-12-09 15:40:24.165407] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:07:29.195 [2024-12-09 15:40:24.165516] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid904163 ] 00:07:29.195 [2024-12-09 15:40:24.242100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.195 [2024-12-09 15:40:24.286131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.195 Running 1000 pollers for 1 seconds with 0 microseconds period. 
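The poller_cost figures in the summary block above are derived quantities: busy cycles divided by total_run_count, converted to nanoseconds with the reported tsc_hz. Re-deriving the first run's numbers (values copied from that block):

#!/usr/bin/env python3
# poller_cost_check.py - re-derives the poller_cost values printed by poller_perf
busy_cyc = 2_304_118_298      # "busy" cycle count from the first run above
runs = 827_000                # total_run_count
tsc_hz = 2_300_000_000        # reported TSC frequency (2.3 GHz)

cost_cyc = busy_cyc // runs               # 2786 cycles per poller invocation
cost_nsec = cost_cyc / tsc_hz * 1e9       # ~1211 ns at 2.3 GHz

print(f"poller_cost: {cost_cyc} (cyc), {cost_nsec:.0f} (nsec)")

The same arithmetic applied to the 0-microsecond run that follows gives 191 cycles, roughly 83 ns per invocation.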
00:07:30.132 [2024-12-09T14:40:25.360Z] ====================================== 00:07:30.132 [2024-12-09T14:40:25.360Z] busy:2301200840 (cyc) 00:07:30.132 [2024-12-09T14:40:25.360Z] total_run_count: 12013000 00:07:30.132 [2024-12-09T14:40:25.360Z] tsc_hz: 2300000000 (cyc) 00:07:30.132 [2024-12-09T14:40:25.360Z] ====================================== 00:07:30.132 [2024-12-09T14:40:25.360Z] poller_cost: 191 (cyc), 83 (nsec) 00:07:30.132 00:07:30.132 real 0m1.175s 00:07:30.132 user 0m1.095s 00:07:30.132 sys 0m0.076s 00:07:30.132 15:40:25 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.132 15:40:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:30.132 ************************************ 00:07:30.132 END TEST thread_poller_perf 00:07:30.132 ************************************ 00:07:30.391 15:40:25 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:07:30.391 15:40:25 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:07:30.391 15:40:25 thread -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.391 15:40:25 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.391 15:40:25 thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.391 ************************************ 00:07:30.391 START TEST thread_spdk_lock 00:07:30.391 ************************************ 00:07:30.391 15:40:25 thread.thread_spdk_lock -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:07:30.391 [2024-12-09 15:40:25.429303] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:07:30.391 [2024-12-09 15:40:25.429387] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid904357 ] 00:07:30.391 [2024-12-09 15:40:25.506768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:30.391 [2024-12-09 15:40:25.553961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.391 [2024-12-09 15:40:25.553963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.960 [2024-12-09 15:40:26.041130] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 989:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:30.960 [2024-12-09 15:40:26.041165] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3140:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:07:30.960 [2024-12-09 15:40:26.041176] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3095:sspin_stacks_print: *ERROR*: spinlock 0x14de980 00:07:30.960 [2024-12-09 15:40:26.041898] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 884:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:30.960 [2024-12-09 15:40:26.042002] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1050:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:30.960 [2024-12-09 
15:40:26.042021] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 884:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:30.960 Starting test contend 00:07:30.960 Worker Delay Wait us Hold us Total us 00:07:30.960 0 3 166060 183545 349605 00:07:30.960 1 5 81666 283523 365190 00:07:30.960 PASS test contend 00:07:30.960 Starting test hold_by_poller 00:07:30.960 PASS test hold_by_poller 00:07:30.960 Starting test hold_by_message 00:07:30.960 PASS test hold_by_message 00:07:30.960 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:07:30.960 100014 assertions passed 00:07:30.960 0 assertions failed 00:07:30.960 00:07:30.960 real 0m0.670s 00:07:30.960 user 0m1.065s 00:07:30.960 sys 0m0.090s 00:07:30.960 15:40:26 thread.thread_spdk_lock -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.960 15:40:26 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:07:30.960 ************************************ 00:07:30.960 END TEST thread_spdk_lock 00:07:30.960 ************************************ 00:07:30.960 00:07:30.960 real 0m3.461s 00:07:30.960 user 0m3.446s 00:07:30.960 sys 0m0.521s 00:07:30.960 15:40:26 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.960 15:40:26 thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.960 ************************************ 00:07:30.960 END TEST thread 00:07:30.960 ************************************ 00:07:30.960 15:40:26 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:30.960 15:40:26 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:30.960 15:40:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.960 15:40:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.960 15:40:26 -- common/autotest_common.sh@10 -- # set +x 00:07:31.219 ************************************ 00:07:31.219 START TEST app_cmdline 00:07:31.219 ************************************ 00:07:31.219 15:40:26 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:31.219 * Looking for test storage... 
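The spdk_lock unit test above deliberately provokes the spinlock misuse classes that SPDK detects — relocking by the current owner ("Deadlock detected") and yielding the CPU while locks are still held — and then counts assertions. The sketch below is only a loose analogue using OS threads (SPDK's real checks sit in lib/thread/thread.c and are tied to SPDK threads, not OS threads); it shows the owner-tracking idea, not the actual implementation.

#!/usr/bin/env python3
# owner_lock_sketch.py - toy analogue of an owner-tracking lock with misuse checks
import threading

class OwnerCheckedLock:
    """Records which thread holds the lock and turns common misuse into
    immediate, loud errors instead of silent hangs."""

    def __init__(self):
        self._lock = threading.Lock()
        self._owner = None

    def acquire(self):
        me = threading.current_thread()
        if self._owner is me:
            raise RuntimeError("Deadlock detected: thread already holds this lock")
        self._lock.acquire()
        self._owner = me

    def release(self):
        if self._owner is not threading.current_thread():
            raise RuntimeError("Lock released by a thread that does not hold it")
        self._owner = None
        self._lock.release()

def leave_cpu(held_count: int):
    """Analogue of the 'lock(s) held while thread going off CPU' check."""
    if held_count != 0:
        raise RuntimeError("Lock(s) held while thread going off CPU")

if __name__ == "__main__":
    lk = OwnerCheckedLock()
    lk.acquire()
    try:
        lk.acquire()          # same thread again -> reported as a deadlock
    except RuntimeError as err:
        print("caught:", err)
    lk.release()
    leave_cpu(0)              # fine: nothing held when yielding the core
    print("owner-checked lock sketch done")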
00:07:31.219 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:31.219 15:40:26 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:31.219 15:40:26 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:07:31.219 15:40:26 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:31.219 15:40:26 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.219 15:40:26 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:31.219 15:40:26 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.219 15:40:26 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:31.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.219 --rc genhtml_branch_coverage=1 00:07:31.219 --rc genhtml_function_coverage=1 00:07:31.219 --rc genhtml_legend=1 00:07:31.219 --rc geninfo_all_blocks=1 00:07:31.219 --rc geninfo_unexecuted_blocks=1 00:07:31.219 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:31.219 ' 00:07:31.219 15:40:26 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:31.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.219 --rc genhtml_branch_coverage=1 00:07:31.219 --rc genhtml_function_coverage=1 00:07:31.219 --rc 
genhtml_legend=1 00:07:31.219 --rc geninfo_all_blocks=1 00:07:31.219 --rc geninfo_unexecuted_blocks=1 00:07:31.219 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:31.219 ' 00:07:31.219 15:40:26 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:31.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.219 --rc genhtml_branch_coverage=1 00:07:31.219 --rc genhtml_function_coverage=1 00:07:31.219 --rc genhtml_legend=1 00:07:31.219 --rc geninfo_all_blocks=1 00:07:31.219 --rc geninfo_unexecuted_blocks=1 00:07:31.219 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:31.219 ' 00:07:31.219 15:40:26 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:31.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.219 --rc genhtml_branch_coverage=1 00:07:31.219 --rc genhtml_function_coverage=1 00:07:31.219 --rc genhtml_legend=1 00:07:31.219 --rc geninfo_all_blocks=1 00:07:31.219 --rc geninfo_unexecuted_blocks=1 00:07:31.219 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:31.219 ' 00:07:31.219 15:40:26 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:31.219 15:40:26 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=904532 00:07:31.219 15:40:26 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 904532 00:07:31.219 15:40:26 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 904532 ']' 00:07:31.219 15:40:26 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.219 15:40:26 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.219 15:40:26 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.219 15:40:26 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.219 15:40:26 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:31.219 15:40:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:31.219 [2024-12-09 15:40:26.383345] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:07:31.219 [2024-12-09 15:40:26.383436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid904532 ] 00:07:31.477 [2024-12-09 15:40:26.455412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.478 [2024-12-09 15:40:26.503125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.736 15:40:26 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.736 15:40:26 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:31.736 15:40:26 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:31.736 { 00:07:31.736 "version": "SPDK v25.01-pre git sha1 b8248e28c", 00:07:31.736 "fields": { 00:07:31.736 "major": 25, 00:07:31.736 "minor": 1, 00:07:31.736 "patch": 0, 00:07:31.736 "suffix": "-pre", 00:07:31.736 "commit": "b8248e28c" 00:07:31.736 } 00:07:31.736 } 00:07:31.736 15:40:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:31.737 15:40:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:31.737 15:40:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:31.737 15:40:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:31.737 15:40:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:31.737 15:40:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:31.737 15:40:26 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.737 15:40:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:31.737 15:40:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:31.737 15:40:26 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.737 15:40:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:31.737 15:40:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:31.737 15:40:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:31.737 15:40:26 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:31.737 15:40:26 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:31.737 15:40:26 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:31.737 15:40:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.737 15:40:26 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:31.737 15:40:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.737 15:40:26 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:31.737 15:40:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.737 15:40:26 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:31.737 15:40:26 app_cmdline -- 
common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:07:31.737 15:40:26 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:31.996 request: 00:07:31.996 { 00:07:31.996 "method": "env_dpdk_get_mem_stats", 00:07:31.996 "req_id": 1 00:07:31.996 } 00:07:31.996 Got JSON-RPC error response 00:07:31.996 response: 00:07:31.996 { 00:07:31.996 "code": -32601, 00:07:31.996 "message": "Method not found" 00:07:31.996 } 00:07:31.996 15:40:27 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:31.996 15:40:27 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:31.996 15:40:27 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:31.996 15:40:27 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:31.996 15:40:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 904532 00:07:31.996 15:40:27 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 904532 ']' 00:07:31.996 15:40:27 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 904532 00:07:31.996 15:40:27 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:31.996 15:40:27 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.996 15:40:27 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 904532 00:07:31.996 15:40:27 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:31.996 15:40:27 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:31.996 15:40:27 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 904532' 00:07:31.996 killing process with pid 904532 00:07:31.996 15:40:27 app_cmdline -- common/autotest_common.sh@973 -- # kill 904532 00:07:31.996 15:40:27 app_cmdline -- common/autotest_common.sh@978 -- # wait 904532 00:07:32.564 00:07:32.564 real 0m1.311s 00:07:32.564 user 0m1.503s 00:07:32.564 sys 0m0.485s 00:07:32.564 15:40:27 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.565 15:40:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:32.565 ************************************ 00:07:32.565 END TEST app_cmdline 00:07:32.565 ************************************ 00:07:32.565 15:40:27 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:07:32.565 15:40:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.565 15:40:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.565 15:40:27 -- common/autotest_common.sh@10 -- # set +x 00:07:32.565 ************************************ 00:07:32.565 START TEST version 00:07:32.565 ************************************ 00:07:32.565 15:40:27 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:07:32.565 * Looking for test storage... 
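In the cmdline test above, spdk_tgt is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so rpc_get_methods reports exactly those two methods and the later env_dpdk_get_mem_stats call is rejected with -32601 ("Method not found"). Below is a sketch of checking that allow-list behaviour over the RPC socket; the socket path and response framing are assumptions, and the test itself uses scripts/rpc.py.

#!/usr/bin/env python3
# rpc_allowlist_check.py - sketch: verify an allow-listed target only exposes
# the expected methods, mirroring the cmdline.sh checks above.
import json
import socket

SOCK = "/var/tmp/spdk.sock"   # assumption: default RPC socket of the allow-listed target

def call(method, req_id=1):
    """Send one JSON-RPC request to SOCK and return the decoded response."""
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCK)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise RuntimeError("no response before the socket closed")
            buf += chunk
            try:
                return json.loads(buf.decode())
            except ValueError:
                continue

if __name__ == "__main__":
    methods = sorted(call("rpc_get_methods")["result"])
    assert methods == ["rpc_get_methods", "spdk_get_version"], methods
    err = call("env_dpdk_get_mem_stats", req_id=2).get("error", {})
    assert err.get("code") == -32601, err     # "Method not found", as in the log
    print("allow-list behaves as expected:", methods)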
00:07:32.565 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:32.565 15:40:27 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:32.565 15:40:27 version -- common/autotest_common.sh@1711 -- # lcov --version 00:07:32.565 15:40:27 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:32.565 15:40:27 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:32.565 15:40:27 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:32.565 15:40:27 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:32.565 15:40:27 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:32.565 15:40:27 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.565 15:40:27 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:32.565 15:40:27 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:32.565 15:40:27 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:32.565 15:40:27 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:32.565 15:40:27 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:32.565 15:40:27 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:32.565 15:40:27 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:32.565 15:40:27 version -- scripts/common.sh@344 -- # case "$op" in 00:07:32.565 15:40:27 version -- scripts/common.sh@345 -- # : 1 00:07:32.565 15:40:27 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:32.565 15:40:27 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:32.565 15:40:27 version -- scripts/common.sh@365 -- # decimal 1 00:07:32.565 15:40:27 version -- scripts/common.sh@353 -- # local d=1 00:07:32.565 15:40:27 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.565 15:40:27 version -- scripts/common.sh@355 -- # echo 1 00:07:32.565 15:40:27 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:32.565 15:40:27 version -- scripts/common.sh@366 -- # decimal 2 00:07:32.565 15:40:27 version -- scripts/common.sh@353 -- # local d=2 00:07:32.565 15:40:27 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.565 15:40:27 version -- scripts/common.sh@355 -- # echo 2 00:07:32.565 15:40:27 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:32.565 15:40:27 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:32.565 15:40:27 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:32.565 15:40:27 version -- scripts/common.sh@368 -- # return 0 00:07:32.565 15:40:27 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.565 15:40:27 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:32.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.565 --rc genhtml_branch_coverage=1 00:07:32.565 --rc genhtml_function_coverage=1 00:07:32.565 --rc genhtml_legend=1 00:07:32.565 --rc geninfo_all_blocks=1 00:07:32.565 --rc geninfo_unexecuted_blocks=1 00:07:32.565 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.565 ' 00:07:32.565 15:40:27 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:32.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.565 --rc genhtml_branch_coverage=1 00:07:32.565 --rc genhtml_function_coverage=1 00:07:32.565 --rc genhtml_legend=1 00:07:32.565 --rc geninfo_all_blocks=1 00:07:32.565 --rc geninfo_unexecuted_blocks=1 00:07:32.565 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.565 ' 00:07:32.565 15:40:27 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:32.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.565 --rc genhtml_branch_coverage=1 00:07:32.565 --rc genhtml_function_coverage=1 00:07:32.565 --rc genhtml_legend=1 00:07:32.565 --rc geninfo_all_blocks=1 00:07:32.565 --rc geninfo_unexecuted_blocks=1 00:07:32.565 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.565 ' 00:07:32.565 15:40:27 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:32.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.565 --rc genhtml_branch_coverage=1 00:07:32.565 --rc genhtml_function_coverage=1 00:07:32.565 --rc genhtml_legend=1 00:07:32.565 --rc geninfo_all_blocks=1 00:07:32.565 --rc geninfo_unexecuted_blocks=1 00:07:32.565 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.565 ' 00:07:32.565 15:40:27 version -- app/version.sh@17 -- # get_header_version major 00:07:32.565 15:40:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:32.565 15:40:27 version -- app/version.sh@14 -- # cut -f2 00:07:32.565 15:40:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:32.565 15:40:27 version -- app/version.sh@17 -- # major=25 00:07:32.565 15:40:27 version -- app/version.sh@18 -- # get_header_version minor 00:07:32.565 15:40:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:32.565 15:40:27 version -- app/version.sh@14 -- # cut -f2 00:07:32.565 15:40:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:32.565 15:40:27 version -- app/version.sh@18 -- # minor=1 00:07:32.565 15:40:27 version -- app/version.sh@19 -- # get_header_version patch 00:07:32.565 15:40:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:32.565 15:40:27 version -- app/version.sh@14 -- # cut -f2 00:07:32.565 15:40:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:32.565 15:40:27 version -- app/version.sh@19 -- # patch=0 00:07:32.565 15:40:27 version -- app/version.sh@20 -- # get_header_version suffix 00:07:32.825 15:40:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:32.825 15:40:27 version -- app/version.sh@14 -- # cut -f2 00:07:32.825 15:40:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:32.825 15:40:27 version -- app/version.sh@20 -- # suffix=-pre 00:07:32.825 15:40:27 version -- app/version.sh@22 -- # version=25.1 00:07:32.825 15:40:27 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:32.825 15:40:27 version -- app/version.sh@28 -- # version=25.1rc0 00:07:32.825 15:40:27 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:32.825 15:40:27 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:07:32.825 15:40:27 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:32.825 15:40:27 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:32.825 00:07:32.825 real 0m0.264s 00:07:32.825 user 0m0.149s 00:07:32.825 sys 0m0.170s 00:07:32.825 15:40:27 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.825 15:40:27 version -- common/autotest_common.sh@10 -- # set +x 00:07:32.825 ************************************ 00:07:32.825 END TEST version 00:07:32.825 ************************************ 00:07:32.825 15:40:27 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:32.825 15:40:27 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:32.825 15:40:27 -- spdk/autotest.sh@194 -- # uname -s 00:07:32.825 15:40:27 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:32.825 15:40:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:32.825 15:40:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:32.825 15:40:27 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:32.825 15:40:27 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:32.825 15:40:27 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:32.825 15:40:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:32.825 15:40:27 -- common/autotest_common.sh@10 -- # set +x 00:07:32.825 15:40:27 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:32.825 15:40:27 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:32.825 15:40:27 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:07:32.825 15:40:27 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:07:32.825 15:40:27 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:07:32.825 15:40:27 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:07:32.825 15:40:27 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:07:32.825 15:40:27 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:07:32.825 15:40:27 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:07:32.825 15:40:27 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:07:32.825 15:40:27 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:07:32.825 15:40:27 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:07:32.825 15:40:27 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:07:32.825 15:40:27 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:07:32.825 15:40:27 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:07:32.825 15:40:27 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:07:32.825 15:40:27 -- spdk/autotest.sh@374 -- # [[ 1 -eq 1 ]] 00:07:32.825 15:40:27 -- spdk/autotest.sh@375 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:32.825 15:40:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.825 15:40:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.825 15:40:27 -- common/autotest_common.sh@10 -- # set +x 00:07:32.825 ************************************ 00:07:32.825 START TEST llvm_fuzz 00:07:32.825 ************************************ 00:07:32.825 15:40:27 llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:33.085 * Looking for test storage... 
00:07:33.085 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:07:33.085 15:40:28 llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:33.085 15:40:28 llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:07:33.085 15:40:28 llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:33.085 15:40:28 llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.085 15:40:28 llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:33.085 15:40:28 llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.085 15:40:28 llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:33.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.086 --rc genhtml_branch_coverage=1 00:07:33.086 --rc genhtml_function_coverage=1 00:07:33.086 --rc genhtml_legend=1 00:07:33.086 --rc geninfo_all_blocks=1 00:07:33.086 --rc geninfo_unexecuted_blocks=1 00:07:33.086 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:33.086 ' 00:07:33.086 15:40:28 llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:33.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.086 --rc genhtml_branch_coverage=1 00:07:33.086 --rc genhtml_function_coverage=1 00:07:33.086 --rc genhtml_legend=1 00:07:33.086 --rc geninfo_all_blocks=1 00:07:33.086 --rc 
geninfo_unexecuted_blocks=1 00:07:33.086 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:33.086 ' 00:07:33.086 15:40:28 llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:33.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.086 --rc genhtml_branch_coverage=1 00:07:33.086 --rc genhtml_function_coverage=1 00:07:33.086 --rc genhtml_legend=1 00:07:33.086 --rc geninfo_all_blocks=1 00:07:33.086 --rc geninfo_unexecuted_blocks=1 00:07:33.086 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:33.086 ' 00:07:33.086 15:40:28 llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:33.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.086 --rc genhtml_branch_coverage=1 00:07:33.086 --rc genhtml_function_coverage=1 00:07:33.086 --rc genhtml_legend=1 00:07:33.086 --rc geninfo_all_blocks=1 00:07:33.086 --rc geninfo_unexecuted_blocks=1 00:07:33.086 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:33.086 ' 00:07:33.086 15:40:28 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:07:33.086 15:40:28 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:07:33.086 15:40:28 llvm_fuzz -- common/autotest_common.sh@550 -- # fuzzers=() 00:07:33.086 15:40:28 llvm_fuzz -- common/autotest_common.sh@550 -- # local fuzzers 00:07:33.086 15:40:28 llvm_fuzz -- common/autotest_common.sh@552 -- # [[ -n '' ]] 00:07:33.086 15:40:28 llvm_fuzz -- common/autotest_common.sh@555 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:07:33.086 15:40:28 llvm_fuzz -- common/autotest_common.sh@556 -- # fuzzers=("${fuzzers[@]##*/}") 00:07:33.086 15:40:28 llvm_fuzz -- common/autotest_common.sh@559 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:07:33.086 15:40:28 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:07:33.086 15:40:28 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:07:33.086 15:40:28 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:33.086 15:40:28 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:33.086 15:40:28 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:33.086 15:40:28 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:33.086 15:40:28 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:33.086 15:40:28 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:33.086 15:40:28 llvm_fuzz -- fuzz/llvm.sh@19 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:33.086 15:40:28 llvm_fuzz -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.086 15:40:28 llvm_fuzz -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.086 15:40:28 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:33.086 ************************************ 00:07:33.086 START TEST nvmf_llvm_fuzz 00:07:33.086 ************************************ 00:07:33.086 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:33.086 * Looking for test storage... 
00:07:33.086 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:33.086 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:33.086 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:07:33.086 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:33.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.349 --rc genhtml_branch_coverage=1 00:07:33.349 --rc genhtml_function_coverage=1 00:07:33.349 --rc genhtml_legend=1 00:07:33.349 --rc geninfo_all_blocks=1 00:07:33.349 --rc geninfo_unexecuted_blocks=1 00:07:33.349 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:33.349 ' 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:33.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.349 --rc genhtml_branch_coverage=1 00:07:33.349 --rc genhtml_function_coverage=1 00:07:33.349 --rc genhtml_legend=1 00:07:33.349 --rc geninfo_all_blocks=1 00:07:33.349 --rc geninfo_unexecuted_blocks=1 00:07:33.349 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:33.349 ' 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:33.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.349 --rc genhtml_branch_coverage=1 00:07:33.349 --rc genhtml_function_coverage=1 00:07:33.349 --rc genhtml_legend=1 00:07:33.349 --rc geninfo_all_blocks=1 00:07:33.349 --rc geninfo_unexecuted_blocks=1 00:07:33.349 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:33.349 ' 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:33.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.349 --rc genhtml_branch_coverage=1 00:07:33.349 --rc genhtml_function_coverage=1 00:07:33.349 --rc genhtml_legend=1 00:07:33.349 --rc geninfo_all_blocks=1 00:07:33.349 --rc geninfo_unexecuted_blocks=1 00:07:33.349 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:33.349 ' 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:07:33.349 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:33.350 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:33.350 #define SPDK_CONFIG_H 00:07:33.350 #define SPDK_CONFIG_AIO_FSDEV 1 00:07:33.350 #define SPDK_CONFIG_APPS 1 00:07:33.350 #define SPDK_CONFIG_ARCH native 00:07:33.350 #undef SPDK_CONFIG_ASAN 00:07:33.350 #undef SPDK_CONFIG_AVAHI 00:07:33.350 #undef SPDK_CONFIG_CET 00:07:33.350 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:07:33.350 #define SPDK_CONFIG_COVERAGE 1 00:07:33.350 #define SPDK_CONFIG_CROSS_PREFIX 00:07:33.350 #undef SPDK_CONFIG_CRYPTO 00:07:33.350 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:33.350 #undef SPDK_CONFIG_CUSTOMOCF 00:07:33.350 #undef SPDK_CONFIG_DAOS 00:07:33.350 #define SPDK_CONFIG_DAOS_DIR 00:07:33.350 #define SPDK_CONFIG_DEBUG 1 00:07:33.350 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:33.350 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:33.350 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:33.350 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:33.350 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:33.350 #undef SPDK_CONFIG_DPDK_UADK 00:07:33.350 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:33.350 #define SPDK_CONFIG_EXAMPLES 1 00:07:33.350 #undef SPDK_CONFIG_FC 00:07:33.350 #define SPDK_CONFIG_FC_PATH 00:07:33.350 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:33.350 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:33.350 #define SPDK_CONFIG_FSDEV 1 00:07:33.350 #undef SPDK_CONFIG_FUSE 00:07:33.350 #define SPDK_CONFIG_FUZZER 1 00:07:33.350 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:33.350 #undef 
SPDK_CONFIG_GOLANG 00:07:33.350 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:33.350 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:33.350 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:33.350 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:33.350 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:33.350 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:33.350 #undef SPDK_CONFIG_HAVE_LZ4 00:07:33.350 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:07:33.350 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:07:33.350 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:33.350 #define SPDK_CONFIG_IDXD 1 00:07:33.350 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:33.350 #undef SPDK_CONFIG_IPSEC_MB 00:07:33.350 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:33.350 #define SPDK_CONFIG_ISAL 1 00:07:33.350 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:33.350 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:33.350 #define SPDK_CONFIG_LIBDIR 00:07:33.350 #undef SPDK_CONFIG_LTO 00:07:33.350 #define SPDK_CONFIG_MAX_LCORES 128 00:07:33.350 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:07:33.350 #define SPDK_CONFIG_NVME_CUSE 1 00:07:33.350 #undef SPDK_CONFIG_OCF 00:07:33.350 #define SPDK_CONFIG_OCF_PATH 00:07:33.350 #define SPDK_CONFIG_OPENSSL_PATH 00:07:33.350 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:33.350 #define SPDK_CONFIG_PGO_DIR 00:07:33.350 #undef SPDK_CONFIG_PGO_USE 00:07:33.350 #define SPDK_CONFIG_PREFIX /usr/local 00:07:33.350 #undef SPDK_CONFIG_RAID5F 00:07:33.350 #undef SPDK_CONFIG_RBD 00:07:33.350 #define SPDK_CONFIG_RDMA 1 00:07:33.350 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:33.350 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:33.350 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:33.350 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:33.350 #undef SPDK_CONFIG_SHARED 00:07:33.350 #undef SPDK_CONFIG_SMA 00:07:33.350 #define SPDK_CONFIG_TESTS 1 00:07:33.350 #undef SPDK_CONFIG_TSAN 00:07:33.350 #define SPDK_CONFIG_UBLK 1 00:07:33.350 #define SPDK_CONFIG_UBSAN 1 00:07:33.350 #undef SPDK_CONFIG_UNIT_TESTS 00:07:33.350 #undef SPDK_CONFIG_URING 00:07:33.350 #define SPDK_CONFIG_URING_PATH 00:07:33.350 #undef SPDK_CONFIG_URING_ZNS 00:07:33.351 #undef SPDK_CONFIG_USDT 00:07:33.351 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:33.351 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:33.351 #define SPDK_CONFIG_VFIO_USER 1 00:07:33.351 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:33.351 #define SPDK_CONFIG_VHOST 1 00:07:33.351 #define SPDK_CONFIG_VIRTIO 1 00:07:33.351 #undef SPDK_CONFIG_VTUNE 00:07:33.351 #define SPDK_CONFIG_VTUNE_DIR 00:07:33.351 #define SPDK_CONFIG_WERROR 1 00:07:33.351 #define SPDK_CONFIG_WPDK_DIR 00:07:33.351 #undef SPDK_CONFIG_XNVME 00:07:33.351 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:07:33.351 15:40:28 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:33.351 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # : 0 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # export 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:33.352 15:40:28 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@206 -- # cat 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:33.352 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # export 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # _LCOV= 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ 1 -eq 1 ]] 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # _LCOV=1 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@275 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # export valgrind= 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # valgrind= 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # uname -s 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@289 -- # MAKE=make 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j72 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@309 -- # TEST_MODE= 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # [[ -z 904957 ]] 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # kill -0 904957 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@344 -- # local mount target_dir 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.38O6mM 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.38O6mM/tests/nvmf /tmp/spdk.38O6mM 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # df -T 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=86726463488 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=94500372480 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # 
uses["$mount"]=7773908992 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=47245422592 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=47250186240 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=18894340096 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=18900074496 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5734400 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=47249788928 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=47250186240 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=397312 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=9450024960 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=9450037248 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:33.353 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:07:33.354 * Looking for test storage... 
00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@381 -- # local target_space new_size 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # mount=/ 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # target_space=86726463488 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@394 -- # new_size=9988501504 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:33.354 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@402 -- # return 0 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1698 -- # set -o errtrace 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1703 -- # true 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # xtrace_fd 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:07:33.354 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:33.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.614 --rc genhtml_branch_coverage=1 00:07:33.614 --rc genhtml_function_coverage=1 00:07:33.614 --rc genhtml_legend=1 00:07:33.614 --rc geninfo_all_blocks=1 00:07:33.614 --rc geninfo_unexecuted_blocks=1 00:07:33.614 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:33.614 ' 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:33.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.614 --rc genhtml_branch_coverage=1 00:07:33.614 --rc genhtml_function_coverage=1 00:07:33.614 --rc genhtml_legend=1 00:07:33.614 --rc geninfo_all_blocks=1 00:07:33.614 --rc geninfo_unexecuted_blocks=1 00:07:33.614 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:33.614 ' 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:33.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.614 --rc genhtml_branch_coverage=1 00:07:33.614 --rc genhtml_function_coverage=1 00:07:33.614 --rc genhtml_legend=1 00:07:33.614 --rc geninfo_all_blocks=1 00:07:33.614 --rc geninfo_unexecuted_blocks=1 00:07:33.614 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:33.614 ' 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:33.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.614 --rc genhtml_branch_coverage=1 00:07:33.614 --rc genhtml_function_coverage=1 00:07:33.614 --rc genhtml_legend=1 00:07:33.614 --rc geninfo_all_blocks=1 00:07:33.614 --rc geninfo_unexecuted_blocks=1 00:07:33.614 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:33.614 ' 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:07:33.614 15:40:28 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:33.614 15:40:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:07:33.614 [2024-12-09 15:40:28.674029] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:07:33.614 [2024-12-09 15:40:28.674110] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905013 ] 00:07:33.874 [2024-12-09 15:40:28.950426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.874 [2024-12-09 15:40:29.002057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.874 [2024-12-09 15:40:29.061325] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.874 [2024-12-09 15:40:29.077470] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:07:33.874 INFO: Running with entropic power schedule (0xFF, 100). 00:07:33.874 INFO: Seed: 3986628424 00:07:34.133 INFO: Loaded 1 modules (390541 inline 8-bit counters): 390541 [0x2c82a4c, 0x2ce1fd9), 00:07:34.133 INFO: Loaded 1 PC tables (390541 PCs): 390541 [0x2ce1fe0,0x32d78b0), 00:07:34.133 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:34.133 INFO: A corpus is not provided, starting from an empty corpus 00:07:34.133 #2 INITED exec/s: 0 rss: 66Mb 00:07:34.133 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:34.133 This may also happen if the target rejected all inputs we tried so far 00:07:34.133 [2024-12-09 15:40:29.142975] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.133 [2024-12-09 15:40:29.143008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.392 NEW_FUNC[1/716]: 0x43bbe8 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:07:34.392 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:34.392 #48 NEW cov: 12097 ft: 12098 corp: 2/110b lim: 320 exec/s: 0 rss: 73Mb L: 109/109 MS: 1 InsertRepeatedBytes- 00:07:34.392 [2024-12-09 15:40:29.484026] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.392 [2024-12-09 15:40:29.484085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.392 #49 NEW cov: 12227 ft: 12808 corp: 3/219b lim: 320 exec/s: 0 rss: 74Mb L: 109/109 MS: 1 ChangeBinInt- 00:07:34.392 [2024-12-09 15:40:29.554027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (35) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.392 [2024-12-09 15:40:29.554056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.392 NEW_FUNC[1/1]: 0x1976768 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:07:34.392 #51 NEW cov: 12268 ft: 13442 corp: 4/296b lim: 320 exec/s: 0 rss: 74Mb L: 77/109 MS: 2 InsertByte-CrossOver- 00:07:34.392 
[2024-12-09 15:40:29.594158] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.392 [2024-12-09 15:40:29.594185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.392 [2024-12-09 15:40:29.594236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:f5f5f5f5 cdw11:f5f5f5f5 00:07:34.392 [2024-12-09 15:40:29.594250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.652 #52 NEW cov: 12356 ft: 13858 corp: 5/477b lim: 320 exec/s: 0 rss: 74Mb L: 181/181 MS: 1 InsertRepeatedBytes- 00:07:34.652 [2024-12-09 15:40:29.654171] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.652 [2024-12-09 15:40:29.654197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.652 #53 NEW cov: 12356 ft: 13982 corp: 6/586b lim: 320 exec/s: 0 rss: 74Mb L: 109/181 MS: 1 ChangeBinInt- 00:07:34.652 [2024-12-09 15:40:29.694348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (35) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.652 [2024-12-09 15:40:29.694373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.652 #54 NEW cov: 12356 ft: 14073 corp: 7/663b lim: 320 exec/s: 0 rss: 74Mb L: 77/181 MS: 1 CrossOver- 00:07:34.652 [2024-12-09 15:40:29.754517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (35) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.652 [2024-12-09 15:40:29.754543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.652 #55 NEW cov: 12356 ft: 14116 corp: 8/733b lim: 320 exec/s: 0 rss: 74Mb L: 70/181 MS: 1 EraseBytes- 00:07:34.652 [2024-12-09 15:40:29.814662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (35) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.652 [2024-12-09 15:40:29.814687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.652 #56 NEW cov: 12356 ft: 14148 corp: 9/832b lim: 320 exec/s: 0 rss: 74Mb L: 99/181 MS: 1 CopyPart- 00:07:34.652 [2024-12-09 15:40:29.854821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (35) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.652 [2024-12-09 15:40:29.854852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.911 #57 NEW cov: 12356 ft: 14181 corp: 10/931b lim: 320 exec/s: 0 rss: 74Mb L: 99/181 MS: 1 ChangeBinInt- 00:07:34.911 [2024-12-09 15:40:29.914970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (35) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.911 [2024-12-09 15:40:29.914995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:34.911 #58 NEW cov: 12356 ft: 14268 corp: 11/1031b lim: 320 exec/s: 0 rss: 74Mb L: 100/181 MS: 1 InsertByte- 00:07:34.911 [2024-12-09 15:40:29.975119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (35) qid:0 cid:4 nsid:a cdw10:fafafafa cdw11:00fafafa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.911 [2024-12-09 15:40:29.975144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.911 NEW_FUNC[1/1]: 0x1c54978 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:34.911 #60 NEW cov: 12379 ft: 14364 corp: 12/1099b lim: 320 exec/s: 0 rss: 74Mb L: 68/181 MS: 2 EraseBytes-InsertRepeatedBytes- 00:07:34.911 [2024-12-09 15:40:30.035278] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.911 [2024-12-09 15:40:30.035308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.911 #61 NEW cov: 12379 ft: 14388 corp: 13/1193b lim: 320 exec/s: 0 rss: 74Mb L: 94/181 MS: 1 EraseBytes- 00:07:34.911 [2024-12-09 15:40:30.075378] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.911 [2024-12-09 15:40:30.075408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.911 #62 NEW cov: 12379 ft: 14473 corp: 14/1303b lim: 320 exec/s: 0 rss: 74Mb L: 110/181 MS: 1 CrossOver- 00:07:34.911 [2024-12-09 15:40:30.135492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.911 [2024-12-09 15:40:30.135518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.170 #66 NEW cov: 12379 ft: 14489 corp: 15/1411b lim: 320 exec/s: 66 rss: 74Mb L: 108/181 MS: 4 ChangeByte-CrossOver-CopyPart-InsertRepeatedBytes- 00:07:35.170 [2024-12-09 15:40:30.176081] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES INTERRUPT VECTOR CONFIGURATION cid:5 cdw10:09090909 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.170 [2024-12-09 15:40:30.176108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.170 [2024-12-09 15:40:30.176162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.170 [2024-12-09 15:40:30.176177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.170 [2024-12-09 15:40:30.176227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:f5f5f5f5 cdw11:f5f5f5f5 00:07:35.170 [2024-12-09 15:40:30.176241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.170 NEW_FUNC[1/1]: 0x137fe68 in nvmf_ctrlr_get_features_interrupt_vector_configuration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1709 00:07:35.170 #67 NEW cov: 12412 ft: 14925 corp: 16/1703b lim: 320 exec/s: 67 rss: 74Mb L: 292/292 MS: 1 InsertRepeatedBytes- 00:07:35.170 [2024-12-09 15:40:30.245807] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.170 [2024-12-09 15:40:30.245836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.170 #68 NEW cov: 12412 ft: 15003 corp: 17/1812b lim: 320 exec/s: 68 rss: 75Mb L: 109/292 MS: 1 InsertByte- 00:07:35.170 [2024-12-09 15:40:30.306192] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.170 [2024-12-09 15:40:30.306217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.170 [2024-12-09 15:40:30.306271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:ffff0000 cdw10:ffffffff cdw11:ffffffff 00:07:35.170 [2024-12-09 15:40:30.306285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.170 [2024-12-09 15:40:30.306359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:35.170 [2024-12-09 15:40:30.306373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.170 NEW_FUNC[1/1]: 0x1542098 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2221 00:07:35.170 #69 NEW cov: 12450 ft: 15120 corp: 18/2014b lim: 320 exec/s: 69 rss: 75Mb L: 202/292 MS: 1 InsertRepeatedBytes- 00:07:35.170 [2024-12-09 15:40:30.356586] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES INTERRUPT VECTOR CONFIGURATION cid:5 cdw10:09090909 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.170 [2024-12-09 15:40:30.356612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.170 [2024-12-09 15:40:30.356682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.170 [2024-12-09 15:40:30.356696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.170 [2024-12-09 15:40:30.356758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:f5f5f5f5 cdw11:f5f5f5f5 00:07:35.170 [2024-12-09 15:40:30.356775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.430 #70 NEW cov: 12450 ft: 15158 corp: 19/2306b lim: 320 exec/s: 70 rss: 75Mb L: 292/292 MS: 1 ShuffleBytes- 00:07:35.430 [2024-12-09 15:40:30.416273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.430 [2024-12-09 15:40:30.416299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.430 #71 NEW cov: 12450 ft: 15164 corp: 20/2400b lim: 320 exec/s: 71 rss: 75Mb L: 94/292 MS: 1 EraseBytes- 00:07:35.430 [2024-12-09 15:40:30.456865] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES INTERRUPT VECTOR CONFIGURATION cid:5 cdw10:09090909 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.430 
[2024-12-09 15:40:30.456891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.430 [2024-12-09 15:40:30.456970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.430 [2024-12-09 15:40:30.456984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.430 [2024-12-09 15:40:30.457034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:01f5f5f5 cdw11:f51c0000 00:07:35.430 [2024-12-09 15:40:30.457049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.430 #72 NEW cov: 12450 ft: 15177 corp: 21/2696b lim: 320 exec/s: 72 rss: 75Mb L: 296/296 MS: 1 CMP- DE: "\001\000\000\034"- 00:07:35.430 [2024-12-09 15:40:30.496498] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.430 [2024-12-09 15:40:30.496524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.430 #73 NEW cov: 12450 ft: 15184 corp: 22/2809b lim: 320 exec/s: 73 rss: 75Mb L: 113/296 MS: 1 PersAutoDict- DE: "\001\000\000\034"- 00:07:35.430 [2024-12-09 15:40:30.537092] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES INTERRUPT VECTOR CONFIGURATION cid:5 cdw10:09090909 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.430 [2024-12-09 15:40:30.537118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.430 [2024-12-09 15:40:30.537184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.430 [2024-12-09 15:40:30.537199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.430 [2024-12-09 15:40:30.537251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:f5f5f5f5 cdw11:f5f5f5f5 00:07:35.430 [2024-12-09 15:40:30.537265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.430 #74 NEW cov: 12450 ft: 15193 corp: 23/3101b lim: 320 exec/s: 74 rss: 75Mb L: 292/296 MS: 1 ChangeBinInt- 00:07:35.430 [2024-12-09 15:40:30.577245] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES INTERRUPT VECTOR CONFIGURATION cid:5 cdw10:09090909 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.430 [2024-12-09 15:40:30.577271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.430 [2024-12-09 15:40:30.577338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.430 [2024-12-09 15:40:30.577352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.430 [2024-12-09 15:40:30.577406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:f5f5f5f5 cdw11:f5f5f5f5 00:07:35.430 [2024-12-09 15:40:30.577423] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.430 #75 NEW cov: 12450 ft: 15202 corp: 24/3393b lim: 320 exec/s: 75 rss: 75Mb L: 292/296 MS: 1 ShuffleBytes- 00:07:35.430 [2024-12-09 15:40:30.636905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.430 [2024-12-09 15:40:30.636930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.689 #76 NEW cov: 12450 ft: 15241 corp: 25/3503b lim: 320 exec/s: 76 rss: 75Mb L: 110/296 MS: 1 InsertByte- 00:07:35.689 [2024-12-09 15:40:30.697082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.689 [2024-12-09 15:40:30.697106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.689 #77 NEW cov: 12450 ft: 15260 corp: 26/3612b lim: 320 exec/s: 77 rss: 75Mb L: 109/296 MS: 1 ChangeByte- 00:07:35.689 [2024-12-09 15:40:30.737629] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES INTERRUPT VECTOR CONFIGURATION cid:5 cdw10:09090909 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.689 [2024-12-09 15:40:30.737655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.689 [2024-12-09 15:40:30.737724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.689 [2024-12-09 15:40:30.737739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.689 [2024-12-09 15:40:30.737791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:f5f5f5f5 cdw11:f5f5f5f5 00:07:35.689 [2024-12-09 15:40:30.737804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.689 #78 NEW cov: 12450 ft: 15270 corp: 27/3904b lim: 320 exec/s: 78 rss: 75Mb L: 292/296 MS: 1 ChangeBinInt- 00:07:35.690 [2024-12-09 15:40:30.777741] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES INTERRUPT VECTOR CONFIGURATION cid:5 cdw10:09090909 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.690 [2024-12-09 15:40:30.777769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.690 [2024-12-09 15:40:30.777837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.690 [2024-12-09 15:40:30.777857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.690 [2024-12-09 15:40:30.777912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:f5f5f5f5 cdw11:f5f5f5f5 00:07:35.690 [2024-12-09 15:40:30.777926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.690 #79 NEW cov: 12450 ft: 15273 corp: 28/4196b lim: 320 exec/s: 79 rss: 75Mb L: 292/296 MS: 1 ChangeByte- 00:07:35.690 [2024-12-09 15:40:30.817874] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES INTERRUPT VECTOR CONFIGURATION cid:5 
cdw10:09090909 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.690 [2024-12-09 15:40:30.817900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.690 [2024-12-09 15:40:30.817969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.690 [2024-12-09 15:40:30.817994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.690 [2024-12-09 15:40:30.818043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:f5f5f5f5 cdw11:f5f5f5f5 00:07:35.690 [2024-12-09 15:40:30.818057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.690 #80 NEW cov: 12450 ft: 15279 corp: 29/4488b lim: 320 exec/s: 80 rss: 75Mb L: 292/296 MS: 1 ChangeBinInt- 00:07:35.690 [2024-12-09 15:40:30.858003] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES INTERRUPT VECTOR CONFIGURATION cid:5 cdw10:09090909 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.690 [2024-12-09 15:40:30.858028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.690 [2024-12-09 15:40:30.858078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:bb000000 00:07:35.690 [2024-12-09 15:40:30.858092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.690 [2024-12-09 15:40:30.858161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:f5f5f5f5 cdw11:f5f5f5f5 00:07:35.690 [2024-12-09 15:40:30.858175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.690 #81 NEW cov: 12450 ft: 15298 corp: 30/4792b lim: 320 exec/s: 81 rss: 75Mb L: 304/304 MS: 1 InsertRepeatedBytes- 00:07:35.949 [2024-12-09 15:40:30.917680] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.949 [2024-12-09 15:40:30.917706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.949 #82 NEW cov: 12450 ft: 15321 corp: 31/4860b lim: 320 exec/s: 82 rss: 75Mb L: 68/304 MS: 1 EraseBytes- 00:07:35.949 [2024-12-09 15:40:30.978199] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES INTERRUPT VECTOR CONFIGURATION cid:5 cdw10:09090909 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.949 [2024-12-09 15:40:30.978224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.949 [2024-12-09 15:40:30.978278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.949 [2024-12-09 15:40:30.978292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.949 #83 NEW cov: 12450 ft: 15330 corp: 32/5074b lim: 320 exec/s: 83 rss: 75Mb L: 214/304 MS: 1 EraseBytes- 00:07:35.949 [2024-12-09 15:40:31.018105] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.949 [2024-12-09 15:40:31.018130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.949 [2024-12-09 15:40:31.018183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.949 [2024-12-09 15:40:31.018197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.949 #84 NEW cov: 12450 ft: 15347 corp: 33/5260b lim: 320 exec/s: 84 rss: 75Mb L: 186/304 MS: 1 CrossOver- 00:07:35.949 [2024-12-09 15:40:31.058100] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.950 [2024-12-09 15:40:31.058124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.950 #85 NEW cov: 12450 ft: 15371 corp: 34/5373b lim: 320 exec/s: 85 rss: 75Mb L: 113/304 MS: 1 ShuffleBytes- 00:07:35.950 [2024-12-09 15:40:31.118224] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.950 [2024-12-09 15:40:31.118248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.950 #86 NEW cov: 12450 ft: 15386 corp: 35/5486b lim: 320 exec/s: 43 rss: 75Mb L: 113/304 MS: 1 ChangeBit- 00:07:35.950 #86 DONE cov: 12450 ft: 15386 corp: 35/5486b lim: 320 exec/s: 43 rss: 75Mb 00:07:35.950 ###### Recommended dictionary. ###### 00:07:35.950 "\001\000\000\034" # Uses: 1 00:07:35.950 ###### End of recommended dictionary. 
###### 00:07:35.950 Done 86 runs in 2 second(s) 00:07:36.209 15:40:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:07:36.209 15:40:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:36.209 15:40:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:36.209 15:40:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:36.209 15:40:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:07:36.209 15:40:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:36.209 15:40:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:36.209 15:40:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:36.209 15:40:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:07:36.209 15:40:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:36.209 15:40:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:36.209 15:40:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:07:36.209 15:40:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:07:36.209 15:40:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:36.209 15:40:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:07:36.209 15:40:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:36.209 15:40:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:36.209 15:40:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:36.209 15:40:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:07:36.209 [2024-12-09 15:40:31.317137] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:07:36.209 [2024-12-09 15:40:31.317220] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905375 ] 00:07:36.468 [2024-12-09 15:40:31.581180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.468 [2024-12-09 15:40:31.629325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.468 [2024-12-09 15:40:31.688555] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.727 [2024-12-09 15:40:31.704688] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:07:36.727 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:36.728 INFO: Seed: 2316660924 00:07:36.728 INFO: Loaded 1 modules (390541 inline 8-bit counters): 390541 [0x2c82a4c, 0x2ce1fd9), 00:07:36.728 INFO: Loaded 1 PC tables (390541 PCs): 390541 [0x2ce1fe0,0x32d78b0), 00:07:36.728 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:36.728 INFO: A corpus is not provided, starting from an empty corpus 00:07:36.728 #2 INITED exec/s: 0 rss: 66Mb 00:07:36.728 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:36.728 This may also happen if the target rejected all inputs we tried so far 00:07:36.728 [2024-12-09 15:40:31.753186] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:36.728 [2024-12-09 15:40:31.753301] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:36.728 [2024-12-09 15:40:31.753408] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff0a 00:07:36.728 [2024-12-09 15:40:31.753604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.728 [2024-12-09 15:40:31.753636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.728 [2024-12-09 15:40:31.753688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.728 [2024-12-09 15:40:31.753702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.728 [2024-12-09 15:40:31.753753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.728 [2024-12-09 15:40:31.753767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.988 NEW_FUNC[1/717]: 0x43c4e8 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:07:36.988 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:36.988 #3 NEW cov: 12198 ft: 12170 corp: 2/19b lim: 30 exec/s: 0 rss: 73Mb L: 18/18 MS: 1 InsertRepeatedBytes- 00:07:36.988 [2024-12-09 15:40:32.084011] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:36.988 [2024-12-09 15:40:32.084156] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:36.988 [2024-12-09 15:40:32.084366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6c6c006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.988 [2024-12-09 15:40:32.084400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.988 [2024-12-09 15:40:32.084455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6c6c006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.988 [2024-12-09 15:40:32.084470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:07:36.988 #12 NEW cov: 12334 ft: 13085 corp: 3/34b lim: 30 exec/s: 0 rss: 74Mb L: 15/18 MS: 4 ChangeBit-CopyPart-ChangeByte-InsertRepeatedBytes- 00:07:36.988 [2024-12-09 15:40:32.124072] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:36.988 [2024-12-09 15:40:32.124182] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (27756) > len (4) 00:07:36.988 [2024-12-09 15:40:32.124292] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:36.988 [2024-12-09 15:40:32.124510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6c6c0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.988 [2024-12-09 15:40:32.124537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.988 [2024-12-09 15:40:32.124592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0000006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.988 [2024-12-09 15:40:32.124606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.988 [2024-12-09 15:40:32.124658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6c6c006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.988 [2024-12-09 15:40:32.124672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.988 #13 NEW cov: 12353 ft: 13379 corp: 4/55b lim: 30 exec/s: 0 rss: 74Mb L: 21/21 MS: 1 InsertRepeatedBytes- 00:07:36.988 [2024-12-09 15:40:32.184286] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:36.988 [2024-12-09 15:40:32.184403] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (27756) > len (4) 00:07:36.988 [2024-12-09 15:40:32.184508] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:36.988 [2024-12-09 15:40:32.184712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6c6c0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.988 [2024-12-09 15:40:32.184739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.988 [2024-12-09 15:40:32.184793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0000006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.988 [2024-12-09 15:40:32.184808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.988 [2024-12-09 15:40:32.184863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6c6c006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.988 [2024-12-09 15:40:32.184877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.247 #14 NEW cov: 12438 ft: 13658 corp: 5/76b lim: 30 exec/s: 0 rss: 74Mb L: 21/21 MS: 1 ShuffleBytes- 00:07:37.247 [2024-12-09 15:40:32.244392] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (897460) > buf size (4096) 
00:07:37.247 [2024-12-09 15:40:32.244503] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (27756) > len (4) 00:07:37.247 [2024-12-09 15:40:32.244610] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:37.247 [2024-12-09 15:40:32.244831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6c6c8300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.247 [2024-12-09 15:40:32.244861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.247 [2024-12-09 15:40:32.244914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0000006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.247 [2024-12-09 15:40:32.244928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.247 [2024-12-09 15:40:32.244983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6c6c006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.247 [2024-12-09 15:40:32.244996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.247 #15 NEW cov: 12438 ft: 13756 corp: 6/97b lim: 30 exec/s: 0 rss: 74Mb L: 21/21 MS: 1 CMP- DE: "\000\007"- 00:07:37.247 [2024-12-09 15:40:32.284459] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (897460) > buf size (4096) 00:07:37.247 [2024-12-09 15:40:32.284683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6c6c8300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.247 [2024-12-09 15:40:32.284709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.248 #16 NEW cov: 12438 ft: 14170 corp: 7/108b lim: 30 exec/s: 0 rss: 74Mb L: 11/21 MS: 1 EraseBytes- 00:07:37.248 [2024-12-09 15:40:32.344622] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:37.248 [2024-12-09 15:40:32.344851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.248 [2024-12-09 15:40:32.344876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.248 #18 NEW cov: 12438 ft: 14241 corp: 8/117b lim: 30 exec/s: 0 rss: 74Mb L: 9/21 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:37.248 [2024-12-09 15:40:32.384716] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:37.248 [2024-12-09 15:40:32.384926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.248 [2024-12-09 15:40:32.384951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.248 #24 NEW cov: 12438 ft: 14362 corp: 9/126b lim: 30 exec/s: 0 rss: 74Mb L: 9/21 MS: 1 ChangeByte- 00:07:37.248 [2024-12-09 15:40:32.444994] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:37.248 [2024-12-09 15:40:32.445109] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: 
Invalid log page offset 0x10000e1e1 00:07:37.248 [2024-12-09 15:40:32.445216] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000e1e1 00:07:37.248 [2024-12-09 15:40:32.445322] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000e1e1 00:07:37.248 [2024-12-09 15:40:32.445536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.248 [2024-12-09 15:40:32.445562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.248 [2024-12-09 15:40:32.445614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff818a cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.248 [2024-12-09 15:40:32.445629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.248 [2024-12-09 15:40:32.445682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e1e181e1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.248 [2024-12-09 15:40:32.445695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.248 [2024-12-09 15:40:32.445747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e1e181e1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.248 [2024-12-09 15:40:32.445761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.508 #25 NEW cov: 12438 ft: 14864 corp: 10/153b lim: 30 exec/s: 0 rss: 74Mb L: 27/27 MS: 1 InsertRepeatedBytes- 00:07:37.508 [2024-12-09 15:40:32.505123] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:37.508 [2024-12-09 15:40:32.505251] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000bfff 00:07:37.508 [2024-12-09 15:40:32.505353] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff0a 00:07:37.508 [2024-12-09 15:40:32.505568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.508 [2024-12-09 15:40:32.505594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.508 [2024-12-09 15:40:32.505651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.508 [2024-12-09 15:40:32.505665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.508 [2024-12-09 15:40:32.505720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.508 [2024-12-09 15:40:32.505734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.508 #26 NEW cov: 12438 ft: 14894 corp: 11/171b lim: 30 exec/s: 0 rss: 74Mb L: 18/27 MS: 1 ChangeBit- 00:07:37.508 [2024-12-09 15:40:32.565298] 
ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (897460) > buf size (4096) 00:07:37.508 [2024-12-09 15:40:32.565432] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (786436) > buf size (4096) 00:07:37.508 [2024-12-09 15:40:32.565539] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:37.508 [2024-12-09 15:40:32.565749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6c6c8300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.508 [2024-12-09 15:40:32.565774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.508 [2024-12-09 15:40:32.565828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0000836c cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.508 [2024-12-09 15:40:32.565847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.508 [2024-12-09 15:40:32.565869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6c6c006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.508 [2024-12-09 15:40:32.565883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.508 #27 NEW cov: 12438 ft: 14932 corp: 12/192b lim: 30 exec/s: 0 rss: 74Mb L: 21/27 MS: 1 ChangeBinInt- 00:07:37.508 [2024-12-09 15:40:32.605355] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (112052) > buf size (4096) 00:07:37.508 [2024-12-09 15:40:32.605486] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (27756) > len (4) 00:07:37.508 [2024-12-09 15:40:32.605590] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:37.508 [2024-12-09 15:40:32.605796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6d6c0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.509 [2024-12-09 15:40:32.605823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.509 [2024-12-09 15:40:32.605879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0000006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.509 [2024-12-09 15:40:32.605894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.509 [2024-12-09 15:40:32.605948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6c6c006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.509 [2024-12-09 15:40:32.605961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.509 #28 NEW cov: 12438 ft: 14970 corp: 13/213b lim: 30 exec/s: 0 rss: 74Mb L: 21/27 MS: 1 ChangeBit- 00:07:37.509 [2024-12-09 15:40:32.645486] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:37.509 [2024-12-09 15:40:32.645618] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (27756) > len (4) 00:07:37.509 [2024-12-09 15:40:32.645723] ctrlr.c:2669:nvmf_ctrlr_get_log_page: 
*ERROR*: Get log page: len (111028) > buf size (4096) 00:07:37.509 [2024-12-09 15:40:32.645939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6c6c0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.509 [2024-12-09 15:40:32.645965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.509 [2024-12-09 15:40:32.646022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.509 [2024-12-09 15:40:32.646036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.509 [2024-12-09 15:40:32.646091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6c6c006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.509 [2024-12-09 15:40:32.646108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.509 NEW_FUNC[1/1]: 0x1c54978 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:37.509 #29 NEW cov: 12461 ft: 15024 corp: 14/235b lim: 30 exec/s: 0 rss: 74Mb L: 22/27 MS: 1 InsertByte- 00:07:37.509 [2024-12-09 15:40:32.685600] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:37.509 [2024-12-09 15:40:32.685714] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:37.509 [2024-12-09 15:40:32.685815] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:37.509 [2024-12-09 15:40:32.686050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.509 [2024-12-09 15:40:32.686076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.509 [2024-12-09 15:40:32.686132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.509 [2024-12-09 15:40:32.686147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.509 [2024-12-09 15:40:32.686201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ff2f83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.509 [2024-12-09 15:40:32.686214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.509 #30 NEW cov: 12461 ft: 15079 corp: 15/254b lim: 30 exec/s: 0 rss: 74Mb L: 19/27 MS: 1 InsertByte- 00:07:37.509 [2024-12-09 15:40:32.725694] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000007 00:07:37.509 [2024-12-09 15:40:32.725824] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (27756) > len (4) 00:07:37.509 [2024-12-09 15:40:32.725936] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:37.509 [2024-12-09 15:40:32.726142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6c6c8300 
cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.509 [2024-12-09 15:40:32.726169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.509 [2024-12-09 15:40:32.726224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.509 [2024-12-09 15:40:32.726239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.509 [2024-12-09 15:40:32.726290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6c6c006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.509 [2024-12-09 15:40:32.726304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.769 #31 NEW cov: 12461 ft: 15091 corp: 16/277b lim: 30 exec/s: 31 rss: 74Mb L: 23/27 MS: 1 PersAutoDict- DE: "\000\007"- 00:07:37.769 [2024-12-09 15:40:32.765825] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (112052) > buf size (4096) 00:07:37.769 [2024-12-09 15:40:32.765962] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (27756) > len (4) 00:07:37.769 [2024-12-09 15:40:32.766069] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (27756) > len (32) 00:07:37.769 [2024-12-09 15:40:32.766287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6d6c0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.769 [2024-12-09 15:40:32.766313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.769 [2024-12-09 15:40:32.766372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0000006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.769 [2024-12-09 15:40:32.766387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.769 [2024-12-09 15:40:32.766441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0007006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.769 [2024-12-09 15:40:32.766455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.769 #32 NEW cov: 12461 ft: 15125 corp: 17/298b lim: 30 exec/s: 32 rss: 74Mb L: 21/27 MS: 1 PersAutoDict- DE: "\000\007"- 00:07:37.769 [2024-12-09 15:40:32.826076] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:37.769 [2024-12-09 15:40:32.826285] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (27756) > len (4) 00:07:37.769 [2024-12-09 15:40:32.826389] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:37.769 [2024-12-09 15:40:32.826601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6c6c0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.769 [2024-12-09 15:40:32.826627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.769 [2024-12-09 15:40:32.826684] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0000006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.769 [2024-12-09 15:40:32.826699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.769 [2024-12-09 15:40:32.826754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.769 [2024-12-09 15:40:32.826768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.769 [2024-12-09 15:40:32.826821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:6c6c006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.769 [2024-12-09 15:40:32.826835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.769 #33 NEW cov: 12471 ft: 15161 corp: 18/324b lim: 30 exec/s: 33 rss: 74Mb L: 26/27 MS: 1 InsertRepeatedBytes- 00:07:37.769 [2024-12-09 15:40:32.866037] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:37.769 [2024-12-09 15:40:32.866243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.769 [2024-12-09 15:40:32.866273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.769 #34 NEW cov: 12471 ft: 15187 corp: 19/335b lim: 30 exec/s: 34 rss: 74Mb L: 11/27 MS: 1 PersAutoDict- DE: "\000\007"- 00:07:37.769 [2024-12-09 15:40:32.906195] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:37.769 [2024-12-09 15:40:32.906322] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (27756) > len (4) 00:07:37.769 [2024-12-09 15:40:32.906426] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:07:37.769 [2024-12-09 15:40:32.906633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6c6c0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.769 [2024-12-09 15:40:32.906660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.769 [2024-12-09 15:40:32.906717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0000006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.769 [2024-12-09 15:40:32.906735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.769 [2024-12-09 15:40:32.906791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6c6c006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.769 [2024-12-09 15:40:32.906805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.769 #35 NEW cov: 12471 ft: 15192 corp: 20/356b lim: 30 exec/s: 35 rss: 74Mb L: 21/27 MS: 1 CMP- DE: "\377\377\001\000"- 00:07:37.769 [2024-12-09 15:40:32.946378] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000007 
00:07:37.769 [2024-12-09 15:40:32.946490] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:07:37.769 [2024-12-09 15:40:32.946598] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (27756) > len (4) 00:07:37.769 [2024-12-09 15:40:32.946699] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:37.769 [2024-12-09 15:40:32.946940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6c6c8300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.769 [2024-12-09 15:40:32.946966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.769 [2024-12-09 15:40:32.947023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.769 [2024-12-09 15:40:32.947038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.769 [2024-12-09 15:40:32.947091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0000006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.769 [2024-12-09 15:40:32.947106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.769 [2024-12-09 15:40:32.947162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:6c6c006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.769 [2024-12-09 15:40:32.947175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.769 #36 NEW cov: 12471 ft: 15216 corp: 21/383b lim: 30 exec/s: 36 rss: 74Mb L: 27/27 MS: 1 PersAutoDict- DE: "\377\377\001\000"- 00:07:38.029 [2024-12-09 15:40:33.006503] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:38.029 [2024-12-09 15:40:33.006635] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (27756) > len (4) 00:07:38.029 [2024-12-09 15:40:33.006742] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:38.029 [2024-12-09 15:40:33.006960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6c6c0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.029 [2024-12-09 15:40:33.006997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.029 [2024-12-09 15:40:33.007052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.029 [2024-12-09 15:40:33.007067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.029 [2024-12-09 15:40:33.007120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6c6c006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.029 [2024-12-09 15:40:33.007135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.029 #37 NEW cov: 
12471 ft: 15251 corp: 22/405b lim: 30 exec/s: 37 rss: 75Mb L: 22/27 MS: 1 ShuffleBytes- 00:07:38.029 [2024-12-09 15:40:33.066684] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (112052) > buf size (4096) 00:07:38.029 [2024-12-09 15:40:33.066795] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (27756) > len (4) 00:07:38.029 [2024-12-09 15:40:33.066904] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:38.029 [2024-12-09 15:40:33.067100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6d6c0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.029 [2024-12-09 15:40:33.067127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.029 [2024-12-09 15:40:33.067181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0000006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.029 [2024-12-09 15:40:33.067195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.029 [2024-12-09 15:40:33.067250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6c6c006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.029 [2024-12-09 15:40:33.067265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.029 #38 NEW cov: 12471 ft: 15265 corp: 23/426b lim: 30 exec/s: 38 rss: 75Mb L: 21/27 MS: 1 ChangeByte- 00:07:38.029 [2024-12-09 15:40:33.106780] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:38.029 [2024-12-09 15:40:33.106916] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x16 00:07:38.029 [2024-12-09 15:40:33.107025] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:38.029 [2024-12-09 15:40:33.107228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6c6c0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.029 [2024-12-09 15:40:33.107254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.029 [2024-12-09 15:40:33.107311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.029 [2024-12-09 15:40:33.107326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.029 [2024-12-09 15:40:33.107379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6c6c006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.029 [2024-12-09 15:40:33.107393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.029 #39 NEW cov: 12471 ft: 15314 corp: 24/448b lim: 30 exec/s: 39 rss: 75Mb L: 22/27 MS: 1 ChangeBinInt- 00:07:38.029 [2024-12-09 15:40:33.166906] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:38.029 [2024-12-09 15:40:33.167019] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len 
(1048576) > buf size (4096) 00:07:38.029 [2024-12-09 15:40:33.167236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.029 [2024-12-09 15:40:33.167261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.029 [2024-12-09 15:40:33.167318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.030 [2024-12-09 15:40:33.167333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.030 #45 NEW cov: 12471 ft: 15350 corp: 25/461b lim: 30 exec/s: 45 rss: 75Mb L: 13/27 MS: 1 PersAutoDict- DE: "\377\377\001\000"- 00:07:38.030 [2024-12-09 15:40:33.207072] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:38.030 [2024-12-09 15:40:33.207186] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (7172) > buf size (4096) 00:07:38.030 [2024-12-09 15:40:33.207292] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (27756) > len (4) 00:07:38.030 [2024-12-09 15:40:33.207397] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:38.030 [2024-12-09 15:40:33.207600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6c6c0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.030 [2024-12-09 15:40:33.207626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.030 [2024-12-09 15:40:33.207680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:07000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.030 [2024-12-09 15:40:33.207694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.030 [2024-12-09 15:40:33.207747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0000006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.030 [2024-12-09 15:40:33.207761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.030 [2024-12-09 15:40:33.207814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:6c6c006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.030 [2024-12-09 15:40:33.207828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.030 #46 NEW cov: 12471 ft: 15359 corp: 26/490b lim: 30 exec/s: 46 rss: 75Mb L: 29/29 MS: 1 CrossOver- 00:07:38.030 [2024-12-09 15:40:33.247209] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:38.030 [2024-12-09 15:40:33.247339] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (7172) > buf size (4096) 00:07:38.030 [2024-12-09 15:40:33.247446] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (27756) > len (4) 00:07:38.030 [2024-12-09 15:40:33.247550] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size 
(4096) 00:07:38.030 [2024-12-09 15:40:33.247759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6c6c0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.030 [2024-12-09 15:40:33.247785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.030 [2024-12-09 15:40:33.247840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:07000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.030 [2024-12-09 15:40:33.247860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.030 [2024-12-09 15:40:33.247918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0000006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.030 [2024-12-09 15:40:33.247932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.030 [2024-12-09 15:40:33.247987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:6c6c006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.030 [2024-12-09 15:40:33.248001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.289 #47 NEW cov: 12471 ft: 15394 corp: 27/519b lim: 30 exec/s: 47 rss: 75Mb L: 29/29 MS: 1 ChangeBit- 00:07:38.289 [2024-12-09 15:40:33.307311] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:38.289 [2024-12-09 15:40:33.307436] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:38.289 [2024-12-09 15:40:33.307650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6c6c006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.289 [2024-12-09 15:40:33.307675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.290 [2024-12-09 15:40:33.307734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6c6c006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.290 [2024-12-09 15:40:33.307749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.290 #48 NEW cov: 12471 ft: 15470 corp: 28/533b lim: 30 exec/s: 48 rss: 75Mb L: 14/29 MS: 1 EraseBytes- 00:07:38.290 [2024-12-09 15:40:33.347458] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:38.290 [2024-12-09 15:40:33.347586] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (27756) > len (4) 00:07:38.290 [2024-12-09 15:40:33.347693] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:07:38.290 [2024-12-09 15:40:33.347906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6c6c0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.290 [2024-12-09 15:40:33.347932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.290 [2024-12-09 15:40:33.347988] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0000006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.290 [2024-12-09 15:40:33.348003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.290 [2024-12-09 15:40:33.348060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6c6c006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.290 [2024-12-09 15:40:33.348074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.290 #49 NEW cov: 12471 ft: 15489 corp: 29/554b lim: 30 exec/s: 49 rss: 75Mb L: 21/29 MS: 1 ShuffleBytes- 00:07:38.290 [2024-12-09 15:40:33.407670] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:38.290 [2024-12-09 15:40:33.407785] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000e1e1 00:07:38.290 [2024-12-09 15:40:33.407898] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000e1e1 00:07:38.290 [2024-12-09 15:40:33.407999] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000e1e1 00:07:38.290 [2024-12-09 15:40:33.408217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff8300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.290 [2024-12-09 15:40:33.408243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.290 [2024-12-09 15:40:33.408301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff818a cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.290 [2024-12-09 15:40:33.408315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.290 [2024-12-09 15:40:33.408371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e1e181e1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.290 [2024-12-09 15:40:33.408384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.290 [2024-12-09 15:40:33.408436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e1e181e1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.290 [2024-12-09 15:40:33.408450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.290 #50 NEW cov: 12471 ft: 15503 corp: 30/581b lim: 30 exec/s: 50 rss: 75Mb L: 27/29 MS: 1 PersAutoDict- DE: "\000\007"- 00:07:38.290 [2024-12-09 15:40:33.467821] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:38.290 [2024-12-09 15:40:33.467959] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000e1e1 00:07:38.290 [2024-12-09 15:40:33.468071] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000e1e1 00:07:38.290 [2024-12-09 15:40:33.468178] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000e1e1 00:07:38.290 [2024-12-09 15:40:33.468380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 
cdw10:ffff8300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.290 [2024-12-09 15:40:33.468406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.290 [2024-12-09 15:40:33.468459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff818a cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.290 [2024-12-09 15:40:33.468474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.290 [2024-12-09 15:40:33.468526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e1e181e1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.290 [2024-12-09 15:40:33.468540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.290 [2024-12-09 15:40:33.468593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e1e181e1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.290 [2024-12-09 15:40:33.468607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.290 #51 NEW cov: 12471 ft: 15520 corp: 31/608b lim: 30 exec/s: 51 rss: 75Mb L: 27/29 MS: 1 ChangeBit- 00:07:38.550 [2024-12-09 15:40:33.527945] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:38.550 [2024-12-09 15:40:33.528074] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (27756) > len (4) 00:07:38.550 [2024-12-09 15:40:33.528184] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:07:38.550 [2024-12-09 15:40:33.528390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6c6c0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.550 [2024-12-09 15:40:33.528417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.550 [2024-12-09 15:40:33.528471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0000006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.550 [2024-12-09 15:40:33.528485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.550 [2024-12-09 15:40:33.528537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.550 [2024-12-09 15:40:33.528551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.550 #52 NEW cov: 12471 ft: 15530 corp: 32/629b lim: 30 exec/s: 52 rss: 75Mb L: 21/29 MS: 1 CrossOver- 00:07:38.550 [2024-12-09 15:40:33.588211] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:07:38.550 [2024-12-09 15:40:33.588341] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (215884) > buf size (4096) 00:07:38.550 [2024-12-09 15:40:33.588449] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (108) > len (4) 00:07:38.550 [2024-12-09 15:40:33.588556] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get 
log page: len (111028) > buf size (4096) 00:07:38.550 [2024-12-09 15:40:33.588669] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:38.550 [2024-12-09 15:40:33.588874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.550 [2024-12-09 15:40:33.588900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.550 [2024-12-09 15:40:33.588963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d2d200d2 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.550 [2024-12-09 15:40:33.588977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.550 [2024-12-09 15:40:33.589029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.550 [2024-12-09 15:40:33.589042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.550 [2024-12-09 15:40:33.589092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:6c6c006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.550 [2024-12-09 15:40:33.589123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.550 [2024-12-09 15:40:33.589173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:6c6c006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.550 [2024-12-09 15:40:33.589187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.550 #53 NEW cov: 12471 ft: 15667 corp: 33/659b lim: 30 exec/s: 53 rss: 75Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:07:38.550 [2024-12-09 15:40:33.628194] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:38.550 [2024-12-09 15:40:33.628420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:bfff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.550 [2024-12-09 15:40:33.628447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.550 #54 NEW cov: 12471 ft: 15722 corp: 34/668b lim: 30 exec/s: 54 rss: 75Mb L: 9/30 MS: 1 ShuffleBytes- 00:07:38.550 [2024-12-09 15:40:33.668419] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:07:38.550 [2024-12-09 15:40:33.668550] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (215884) > buf size (4096) 00:07:38.551 [2024-12-09 15:40:33.668661] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (108) > len (4) 00:07:38.551 [2024-12-09 15:40:33.668769] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:38.551 [2024-12-09 15:40:33.668890] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:38.551 [2024-12-09 15:40:33.669107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 
cdw10:d23f02d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.551 [2024-12-09 15:40:33.669132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.551 [2024-12-09 15:40:33.669186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d2d200d2 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.551 [2024-12-09 15:40:33.669200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.551 [2024-12-09 15:40:33.669253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.551 [2024-12-09 15:40:33.669267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.551 [2024-12-09 15:40:33.669321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:6c6c006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.551 [2024-12-09 15:40:33.669335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.551 [2024-12-09 15:40:33.669385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:6c6c006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.551 [2024-12-09 15:40:33.669399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.551 #55 NEW cov: 12471 ft: 15776 corp: 35/698b lim: 30 exec/s: 55 rss: 75Mb L: 30/30 MS: 1 ChangeByte- 00:07:38.551 [2024-12-09 15:40:33.728595] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:07:38.551 [2024-12-09 15:40:33.728727] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (215884) > buf size (4096) 00:07:38.551 [2024-12-09 15:40:33.728839] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (108) > len (4) 00:07:38.551 [2024-12-09 15:40:33.728959] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (110596) > buf size (4096) 00:07:38.551 [2024-12-09 15:40:33.729065] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111028) > buf size (4096) 00:07:38.551 [2024-12-09 15:40:33.729278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.551 [2024-12-09 15:40:33.729303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.551 [2024-12-09 15:40:33.729358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d2d200d2 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.551 [2024-12-09 15:40:33.729372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.551 [2024-12-09 15:40:33.729426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.551 [2024-12-09 15:40:33.729439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.551 [2024-12-09 15:40:33.729492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:6c000007 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.551 [2024-12-09 15:40:33.729507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.551 [2024-12-09 15:40:33.729558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:6c6c006c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.551 [2024-12-09 15:40:33.729572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.551 #56 NEW cov: 12471 ft: 15790 corp: 36/728b lim: 30 exec/s: 28 rss: 75Mb L: 30/30 MS: 1 PersAutoDict- DE: "\000\007"- 00:07:38.551 #56 DONE cov: 12471 ft: 15790 corp: 36/728b lim: 30 exec/s: 28 rss: 75Mb 00:07:38.551 ###### Recommended dictionary. ###### 00:07:38.551 "\000\007" # Uses: 5 00:07:38.551 "\377\377\001\000" # Uses: 2 00:07:38.551 ###### End of recommended dictionary. ###### 00:07:38.551 Done 56 runs in 2 second(s) 00:07:38.811 15:40:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:07:38.811 15:40:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:38.811 15:40:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:38.811 15:40:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:38.811 15:40:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:07:38.811 15:40:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:38.811 15:40:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:38.811 15:40:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:38.811 15:40:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:07:38.811 15:40:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:38.811 15:40:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:38.811 15:40:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:07:38.811 15:40:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:07:38.811 15:40:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:38.811 15:40:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:07:38.811 15:40:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:38.811 15:40:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:38.811 15:40:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:38.811 15:40:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:07:38.811 [2024-12-09 15:40:33.905218] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:07:38.811 [2024-12-09 15:40:33.905290] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905733 ] 00:07:39.070 [2024-12-09 15:40:34.172814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.070 [2024-12-09 15:40:34.221998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.070 [2024-12-09 15:40:34.281138] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.329 [2024-12-09 15:40:34.297295] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:07:39.329 INFO: Running with entropic power schedule (0xFF, 100). 00:07:39.329 INFO: Seed: 614695630 00:07:39.329 INFO: Loaded 1 modules (390541 inline 8-bit counters): 390541 [0x2c82a4c, 0x2ce1fd9), 00:07:39.329 INFO: Loaded 1 PC tables (390541 PCs): 390541 [0x2ce1fe0,0x32d78b0), 00:07:39.329 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:39.329 INFO: A corpus is not provided, starting from an empty corpus 00:07:39.329 #2 INITED exec/s: 0 rss: 66Mb 00:07:39.329 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:39.329 This may also happen if the target rejected all inputs we tried so far 00:07:39.329 [2024-12-09 15:40:34.367868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08080008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.329 [2024-12-09 15:40:34.367910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.329 [2024-12-09 15:40:34.368051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:08080008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.329 [2024-12-09 15:40:34.368073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.589 NEW_FUNC[1/716]: 0x43ef98 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:07:39.589 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:39.589 #17 NEW cov: 12154 ft: 12148 corp: 2/20b lim: 35 exec/s: 0 rss: 73Mb L: 19/19 MS: 5 ChangeBinInt-ChangeBit-ChangeByte-ChangeBit-InsertRepeatedBytes- 00:07:39.589 [2024-12-09 15:40:34.709303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4747000a cdw11:47004747 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.589 [2024-12-09 15:40:34.709345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.589 [2024-12-09 15:40:34.709439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY 
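[editor's note] The log above repeatedly hits nvmf_ctrlr_get_log_page validation ("Invalid log page offset ...", "len > buf size") from a libFuzzer harness whose entry points are TestOneInput() and fuzz_admin_get_log_page_command() in llvm_nvme_fuzz.c. For readers unfamiliar with that shape of harness, the following is a hypothetical, self-contained sketch of the same idea. It is not SPDK's implementation: the toy_admin_cmd struct and validate_get_log_page() helper are simplified stand-ins invented for illustration; only the standard LLVMFuzzerTestOneInput() entry point, the GET LOG PAGE opcode (0x02), and the CDW10-CDW13 field meanings from the NVMe spec are real, fixed interfaces.

/*
 * Hypothetical sketch of a libFuzzer target in the spirit of the harness
 * exercised above.  Build with: clang -g -fsanitize=fuzzer sketch.c
 * (assumption: any clang with libFuzzer support).
 */
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdio.h>

/* Simplified stand-in for an NVMe admin command (not the SPDK struct). */
struct toy_admin_cmd {
	uint8_t  opc;      /* opcode: 0x02 == GET LOG PAGE                */
	uint32_t cdw10;    /* log page id, NUMDL (bits 31:16)             */
	uint32_t cdw11;    /* NUMDU (bits 15:0)                           */
	uint32_t cdw12;    /* log page offset, lower 32 bits (LPOL)       */
	uint32_t cdw13;    /* log page offset, upper 32 bits (LPOU)       */
};

/* Toy validator mirroring the kind of checks the target logs. */
static int
validate_get_log_page(const struct toy_admin_cmd *cmd, size_t buf_size)
{
	uint64_t offset = ((uint64_t)cmd->cdw13 << 32) | cmd->cdw12;
	uint32_t numd   = ((cmd->cdw11 & 0xFFFF) << 16) | (cmd->cdw10 >> 16);
	uint64_t len    = ((uint64_t)numd + 1) * 4;

	if (offset & 0x3) {
		fprintf(stderr, "Invalid log page offset 0x%llx\n",
			(unsigned long long)offset);
		return -1;
	}
	if (len > buf_size) {
		fprintf(stderr, "Get log page: len (%llu) > buf size (%zu)\n",
			(unsigned long long)len, buf_size);
		return -1;
	}
	return 0;
}

/* libFuzzer entry point: treat the fuzz input as raw command dwords. */
int
LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
	struct toy_admin_cmd cmd;

	if (size < sizeof(cmd)) {
		return 0;               /* not enough bytes for one command */
	}
	memcpy(&cmd, data, sizeof(cmd));
	cmd.opc = 0x02;                 /* force GET LOG PAGE               */

	(void)validate_get_log_page(&cmd, 4096);
	return 0;
}

A "###### Recommended dictionary ######" block like the one printed at the end of run 1 can, if saved to a file, typically be fed back to a later run via libFuzzer's -dict= option to bias mutations toward the tokens ("\000\007", "\377\377\001\000") the fuzzer found useful; whether the CI wrapper here does so is not shown in this log.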
(06) qid:0 cid:5 nsid:0 cdw10:47470047 cdw11:47004747 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.589 [2024-12-09 15:40:34.709457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.589 [2024-12-09 15:40:34.709548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:47470047 cdw11:47004747 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.589 [2024-12-09 15:40:34.709564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.589 #29 NEW cov: 12267 ft: 13036 corp: 3/47b lim: 35 exec/s: 0 rss: 73Mb L: 27/27 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:39.589 [2024-12-09 15:40:34.759562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4747000a cdw11:47004747 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.589 [2024-12-09 15:40:34.759589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.589 [2024-12-09 15:40:34.759677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:47470047 cdw11:50004747 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.589 [2024-12-09 15:40:34.759694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.589 [2024-12-09 15:40:34.759784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:47470047 cdw11:47004747 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.589 [2024-12-09 15:40:34.759801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.589 #30 NEW cov: 12273 ft: 13244 corp: 4/74b lim: 35 exec/s: 0 rss: 73Mb L: 27/27 MS: 1 ChangeBinInt- 00:07:39.849 [2024-12-09 15:40:34.829609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08080008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.849 [2024-12-09 15:40:34.829637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.849 [2024-12-09 15:40:34.829735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:08080008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.849 [2024-12-09 15:40:34.829753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.849 #31 NEW cov: 12358 ft: 13541 corp: 5/93b lim: 35 exec/s: 0 rss: 73Mb L: 19/27 MS: 1 ShuffleBytes- 00:07:39.849 [2024-12-09 15:40:34.900490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08080008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.849 [2024-12-09 15:40:34.900518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.849 [2024-12-09 15:40:34.900606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:08080008 cdw11:16000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.849 [2024-12-09 15:40:34.900625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.849 
[2024-12-09 15:40:34.900722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:16160016 cdw11:16001616 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.849 [2024-12-09 15:40:34.900740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.849 [2024-12-09 15:40:34.900848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:08080016 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.849 [2024-12-09 15:40:34.900866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.849 #32 NEW cov: 12358 ft: 14081 corp: 6/122b lim: 35 exec/s: 0 rss: 73Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:07:39.849 [2024-12-09 15:40:34.950263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08080008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.849 [2024-12-09 15:40:34.950289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.849 [2024-12-09 15:40:34.950377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:08080008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.849 [2024-12-09 15:40:34.950394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.849 #33 NEW cov: 12358 ft: 14125 corp: 7/138b lim: 35 exec/s: 0 rss: 73Mb L: 16/29 MS: 1 EraseBytes- 00:07:39.849 [2024-12-09 15:40:35.000979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08080008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.849 [2024-12-09 15:40:35.001006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.849 [2024-12-09 15:40:35.001108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:16160008 cdw11:16001616 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.849 [2024-12-09 15:40:35.001125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.849 [2024-12-09 15:40:35.001215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:16160016 cdw11:08001608 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.849 [2024-12-09 15:40:35.001231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.849 #34 NEW cov: 12358 ft: 14170 corp: 8/163b lim: 35 exec/s: 0 rss: 73Mb L: 25/29 MS: 1 EraseBytes- 00:07:39.849 [2024-12-09 15:40:35.071500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08080008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.849 [2024-12-09 15:40:35.071527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.849 [2024-12-09 15:40:35.071620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:08080008 cdw11:16000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.849 [2024-12-09 15:40:35.071636] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.849 [2024-12-09 15:40:35.071728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:16160016 cdw11:08001616 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.849 [2024-12-09 15:40:35.071745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.849 [2024-12-09 15:40:35.071830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:16160008 cdw11:16001616 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.849 [2024-12-09 15:40:35.071849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.107 #35 NEW cov: 12358 ft: 14194 corp: 9/197b lim: 35 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 CopyPart- 00:07:40.107 [2024-12-09 15:40:35.121144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08080008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.107 [2024-12-09 15:40:35.121174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.107 [2024-12-09 15:40:35.121267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:08080008 cdw11:08000800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.107 [2024-12-09 15:40:35.121283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.107 #36 NEW cov: 12358 ft: 14201 corp: 10/216b lim: 35 exec/s: 0 rss: 73Mb L: 19/34 MS: 1 ChangeBit- 00:07:40.107 [2024-12-09 15:40:35.171623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4747000a cdw11:47001000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.107 [2024-12-09 15:40:35.171652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.107 [2024-12-09 15:40:35.171757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:47470047 cdw11:47004747 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.107 [2024-12-09 15:40:35.171774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.107 [2024-12-09 15:40:35.171874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:47470047 cdw11:47004747 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.107 [2024-12-09 15:40:35.171890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.107 #37 NEW cov: 12358 ft: 14232 corp: 11/243b lim: 35 exec/s: 0 rss: 73Mb L: 27/34 MS: 1 CMP- DE: "\020\000"- 00:07:40.107 [2024-12-09 15:40:35.222204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08080008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.107 [2024-12-09 15:40:35.222231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.107 [2024-12-09 15:40:35.222327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:16160008 cdw11:16001616 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:40.107 [2024-12-09 15:40:35.222343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.107 [2024-12-09 15:40:35.222429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:16ff0016 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.107 [2024-12-09 15:40:35.222447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.107 [2024-12-09 15:40:35.222538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:08080016 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.107 [2024-12-09 15:40:35.222556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.107 NEW_FUNC[1/1]: 0x1c54978 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:40.107 #38 NEW cov: 12381 ft: 14306 corp: 12/272b lim: 35 exec/s: 0 rss: 74Mb L: 29/34 MS: 1 CMP- DE: "\377\377\377\377"- 00:07:40.107 [2024-12-09 15:40:35.292393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08160008 cdw11:16001616 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.108 [2024-12-09 15:40:35.292421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.108 [2024-12-09 15:40:35.292522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:08080008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.108 [2024-12-09 15:40:35.292538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.108 [2024-12-09 15:40:35.292626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:16160016 cdw11:16001616 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.108 [2024-12-09 15:40:35.292646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.108 [2024-12-09 15:40:35.292750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:16160016 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.108 [2024-12-09 15:40:35.292767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.108 #39 NEW cov: 12381 ft: 14400 corp: 13/303b lim: 35 exec/s: 0 rss: 74Mb L: 31/34 MS: 1 CrossOver- 00:07:40.366 [2024-12-09 15:40:35.341977] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:40.366 [2024-12-09 15:40:35.342267] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:40.366 [2024-12-09 15:40:35.342746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.366 [2024-12-09 15:40:35.342775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.366 [2024-12-09 15:40:35.342859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:07:40.366 [2024-12-09 15:40:35.342880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.366 [2024-12-09 15:40:35.342969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.366 [2024-12-09 15:40:35.342990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.366 #45 NEW cov: 12392 ft: 14494 corp: 14/328b lim: 35 exec/s: 45 rss: 74Mb L: 25/34 MS: 1 InsertRepeatedBytes- 00:07:40.366 [2024-12-09 15:40:35.393148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08080008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.366 [2024-12-09 15:40:35.393175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.366 [2024-12-09 15:40:35.393260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:08080008 cdw11:16000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.366 [2024-12-09 15:40:35.393277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.366 [2024-12-09 15:40:35.393360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:16160016 cdw11:3f001616 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.366 [2024-12-09 15:40:35.393376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.366 [2024-12-09 15:40:35.393461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:08080016 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.366 [2024-12-09 15:40:35.393477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.366 #46 NEW cov: 12392 ft: 14570 corp: 15/357b lim: 35 exec/s: 46 rss: 74Mb L: 29/34 MS: 1 ChangeByte- 00:07:40.366 [2024-12-09 15:40:35.443550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08080008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.366 [2024-12-09 15:40:35.443577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.366 [2024-12-09 15:40:35.443667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:16160008 cdw11:16001616 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.366 [2024-12-09 15:40:35.443686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.366 [2024-12-09 15:40:35.443762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:16ff0016 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.366 [2024-12-09 15:40:35.443778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.366 [2024-12-09 15:40:35.443869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:08080016 cdw11:08000808 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:40.366 [2024-12-09 15:40:35.443898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.366 #47 NEW cov: 12392 ft: 14608 corp: 16/387b lim: 35 exec/s: 47 rss: 74Mb L: 30/34 MS: 1 CrossOver- 00:07:40.366 [2024-12-09 15:40:35.513862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4747000a cdw11:47004747 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.366 [2024-12-09 15:40:35.513890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.366 [2024-12-09 15:40:35.513981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:47470047 cdw11:47004747 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.366 [2024-12-09 15:40:35.513999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.366 [2024-12-09 15:40:35.514091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:47470047 cdw11:47004747 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.366 [2024-12-09 15:40:35.514109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.366 #48 NEW cov: 12392 ft: 14649 corp: 17/414b lim: 35 exec/s: 48 rss: 74Mb L: 27/34 MS: 1 ShuffleBytes- 00:07:40.366 [2024-12-09 15:40:35.563403] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:40.366 [2024-12-09 15:40:35.563865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.366 [2024-12-09 15:40:35.563897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.366 [2024-12-09 15:40:35.563992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.366 [2024-12-09 15:40:35.564013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.626 #49 NEW cov: 12392 ft: 14708 corp: 18/428b lim: 35 exec/s: 49 rss: 74Mb L: 14/34 MS: 1 EraseBytes- 00:07:40.626 [2024-12-09 15:40:35.634065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08080008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.626 [2024-12-09 15:40:35.634094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.626 [2024-12-09 15:40:35.634183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:085d0008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.626 [2024-12-09 15:40:35.634201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.626 #50 NEW cov: 12392 ft: 14723 corp: 19/448b lim: 35 exec/s: 50 rss: 74Mb L: 20/34 MS: 1 InsertByte- 00:07:40.626 [2024-12-09 15:40:35.684550] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:40.626 [2024-12-09 15:40:35.685060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4747000a cdw11:47004747 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.626 [2024-12-09 15:40:35.685094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.626 [2024-12-09 15:40:35.685183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:47470047 cdw11:47004747 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.626 [2024-12-09 15:40:35.685201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.626 [2024-12-09 15:40:35.685292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:47470047 cdw11:47004747 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.626 [2024-12-09 15:40:35.685309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.626 [2024-12-09 15:40:35.685407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:3eee00a3 cdw11:9400db0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.627 [2024-12-09 15:40:35.685425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.627 [2024-12-09 15:40:35.685515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:47470000 cdw11:47004747 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.627 [2024-12-09 15:40:35.685534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:40.627 #51 NEW cov: 12392 ft: 14892 corp: 20/483b lim: 35 exec/s: 51 rss: 74Mb L: 35/35 MS: 1 CMP- DE: "\243>\356\333\012\224R\000"- 00:07:40.627 [2024-12-09 15:40:35.745051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08080008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.627 [2024-12-09 15:40:35.745081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.627 [2024-12-09 15:40:35.745169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:08080008 cdw11:16000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.627 [2024-12-09 15:40:35.745187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.627 [2024-12-09 15:40:35.745272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:16160016 cdw11:3f00ff16 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.627 [2024-12-09 15:40:35.745289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.627 [2024-12-09 15:40:35.745379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:08080016 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.627 [2024-12-09 15:40:35.745396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.627 #52 NEW cov: 12392 ft: 14910 corp: 21/512b lim: 35 exec/s: 52 rss: 74Mb L: 29/35 MS: 1 ChangeByte- 00:07:40.627 [2024-12-09 15:40:35.815182] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08080008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.627 [2024-12-09 15:40:35.815211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.627 [2024-12-09 15:40:35.815307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:16160008 cdw11:16001616 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.627 [2024-12-09 15:40:35.815323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.627 [2024-12-09 15:40:35.815433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:16160016 cdw11:16001616 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.627 [2024-12-09 15:40:35.815449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.627 [2024-12-09 15:40:35.815543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:08080008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.627 [2024-12-09 15:40:35.815561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.627 #53 NEW cov: 12392 ft: 14919 corp: 22/540b lim: 35 exec/s: 53 rss: 74Mb L: 28/35 MS: 1 CopyPart- 00:07:40.887 [2024-12-09 15:40:35.864692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08080008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.887 [2024-12-09 15:40:35.864720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.887 [2024-12-09 15:40:35.864823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f70800fd cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.887 [2024-12-09 15:40:35.864842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.887 #54 NEW cov: 12392 ft: 14931 corp: 23/559b lim: 35 exec/s: 54 rss: 74Mb L: 19/35 MS: 1 ChangeBinInt- 00:07:40.887 [2024-12-09 15:40:35.935139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08080008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.887 [2024-12-09 15:40:35.935167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.887 [2024-12-09 15:40:35.935267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:085d0008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.887 [2024-12-09 15:40:35.935284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.887 #55 NEW cov: 12392 ft: 14965 corp: 24/579b lim: 35 exec/s: 55 rss: 74Mb L: 20/35 MS: 1 ChangeByte- 00:07:40.887 [2024-12-09 15:40:36.006018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08160008 cdw11:16001616 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.887 [2024-12-09 15:40:36.006045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.887 [2024-12-09 15:40:36.006135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:08080008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.887 [2024-12-09 15:40:36.006151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.887 [2024-12-09 15:40:36.006244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:16160016 cdw11:16001616 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.887 [2024-12-09 15:40:36.006260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.887 [2024-12-09 15:40:36.006353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:16160016 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.887 [2024-12-09 15:40:36.006369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.887 #56 NEW cov: 12392 ft: 14999 corp: 25/610b lim: 35 exec/s: 56 rss: 74Mb L: 31/35 MS: 1 CopyPart- 00:07:40.887 [2024-12-09 15:40:36.075760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08080008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.887 [2024-12-09 15:40:36.075786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.887 [2024-12-09 15:40:36.075889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:08080040 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.887 [2024-12-09 15:40:36.075907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.887 #57 NEW cov: 12392 ft: 15060 corp: 26/630b lim: 35 exec/s: 57 rss: 74Mb L: 20/35 MS: 1 InsertByte- 00:07:41.146 [2024-12-09 15:40:36.127119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08160008 cdw11:16001616 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.146 [2024-12-09 15:40:36.127147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.146 [2024-12-09 15:40:36.127242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:08080008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.146 [2024-12-09 15:40:36.127259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.146 [2024-12-09 15:40:36.127349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:16160016 cdw11:16001616 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.146 [2024-12-09 15:40:36.127366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.146 [2024-12-09 15:40:36.127457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:16cb0016 cdw11:cb00cbcb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.146 [2024-12-09 15:40:36.127474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 
dnr:0 00:07:41.146 [2024-12-09 15:40:36.127560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:08080008 cdw11:08000816 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.146 [2024-12-09 15:40:36.127576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:41.146 #58 NEW cov: 12392 ft: 15081 corp: 27/665b lim: 35 exec/s: 58 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:41.146 [2024-12-09 15:40:36.196714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08080008 cdw11:ff000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.146 [2024-12-09 15:40:36.196740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.146 [2024-12-09 15:40:36.196835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:16080016 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.146 [2024-12-09 15:40:36.196856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.146 #59 NEW cov: 12392 ft: 15099 corp: 28/681b lim: 35 exec/s: 59 rss: 74Mb L: 16/35 MS: 1 EraseBytes- 00:07:41.146 [2024-12-09 15:40:36.247445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08080008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.146 [2024-12-09 15:40:36.247470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.146 [2024-12-09 15:40:36.247556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:08080008 cdw11:16000816 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.146 [2024-12-09 15:40:36.247574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.146 [2024-12-09 15:40:36.247658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:16160008 cdw11:3f001616 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.146 [2024-12-09 15:40:36.247675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.146 [2024-12-09 15:40:36.247763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:08080016 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.146 [2024-12-09 15:40:36.247779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.146 #60 NEW cov: 12392 ft: 15165 corp: 29/710b lim: 35 exec/s: 60 rss: 74Mb L: 29/35 MS: 1 ShuffleBytes- 00:07:41.147 [2024-12-09 15:40:36.297787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08160008 cdw11:16001616 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.147 [2024-12-09 15:40:36.297813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.147 [2024-12-09 15:40:36.297907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:08080008 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.147 [2024-12-09 15:40:36.297925] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.147 [2024-12-09 15:40:36.298025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:30160016 cdw11:16001616 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.147 [2024-12-09 15:40:36.298041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.147 [2024-12-09 15:40:36.298132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:16160016 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.147 [2024-12-09 15:40:36.298148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.147 #61 NEW cov: 12392 ft: 15172 corp: 30/741b lim: 35 exec/s: 61 rss: 74Mb L: 31/35 MS: 1 ChangeByte- 00:07:41.147 [2024-12-09 15:40:36.348262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08160008 cdw11:2d00162d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.147 [2024-12-09 15:40:36.348289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.147 [2024-12-09 15:40:36.348388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:1616002d cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.147 [2024-12-09 15:40:36.348404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.147 [2024-12-09 15:40:36.348508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:08080008 cdw11:16000816 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.147 [2024-12-09 15:40:36.348525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.147 [2024-12-09 15:40:36.348613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:16160016 cdw11:16001616 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.147 [2024-12-09 15:40:36.348632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.147 [2024-12-09 15:40:36.348718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:08080008 cdw11:08000816 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.147 [2024-12-09 15:40:36.348733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:41.147 #62 NEW cov: 12392 ft: 15200 corp: 31/776b lim: 35 exec/s: 31 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:41.147 #62 DONE cov: 12392 ft: 15200 corp: 31/776b lim: 35 exec/s: 31 rss: 74Mb 00:07:41.147 ###### Recommended dictionary. ###### 00:07:41.147 "\020\000" # Uses: 0 00:07:41.147 "\377\377\377\377" # Uses: 0 00:07:41.147 "\243>\356\333\012\224R\000" # Uses: 0 00:07:41.147 ###### End of recommended dictionary. 
###### 00:07:41.147 Done 62 runs in 2 second(s) 00:07:41.406 15:40:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:07:41.406 15:40:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:41.406 15:40:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:41.406 15:40:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:41.406 15:40:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:07:41.406 15:40:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:41.406 15:40:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:41.406 15:40:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:41.406 15:40:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:07:41.406 15:40:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:41.406 15:40:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:41.406 15:40:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:07:41.406 15:40:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:07:41.406 15:40:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:41.406 15:40:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:07:41.407 15:40:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:41.407 15:40:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:41.407 15:40:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:41.407 15:40:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:07:41.407 [2024-12-09 15:40:36.530918] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:07:41.407 [2024-12-09 15:40:36.530999] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid906090 ] 00:07:41.666 [2024-12-09 15:40:36.797817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.666 [2024-12-09 15:40:36.852466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.925 [2024-12-09 15:40:36.911762] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.925 [2024-12-09 15:40:36.927914] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:07:41.925 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:41.925 INFO: Seed: 3246701424 00:07:41.925 INFO: Loaded 1 modules (390541 inline 8-bit counters): 390541 [0x2c82a4c, 0x2ce1fd9), 00:07:41.925 INFO: Loaded 1 PC tables (390541 PCs): 390541 [0x2ce1fe0,0x32d78b0), 00:07:41.925 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:41.925 INFO: A corpus is not provided, starting from an empty corpus 00:07:41.925 #2 INITED exec/s: 0 rss: 66Mb 00:07:41.925 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:41.925 This may also happen if the target rejected all inputs we tried so far 00:07:42.184 NEW_FUNC[1/705]: 0x440c78 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:07:42.184 NEW_FUNC[2/705]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:42.184 #4 NEW cov: 12049 ft: 12049 corp: 2/8b lim: 20 exec/s: 0 rss: 73Mb L: 7/7 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:42.184 #5 NEW cov: 12176 ft: 13016 corp: 3/16b lim: 20 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 CrossOver- 00:07:42.443 #9 NEW cov: 12182 ft: 13271 corp: 4/27b lim: 20 exec/s: 0 rss: 73Mb L: 11/11 MS: 4 CrossOver-ChangeBinInt-CopyPart-CMP- DE: "\377\377\377\377\377\377\377\007"- 00:07:42.443 #10 NEW cov: 12267 ft: 13482 corp: 5/38b lim: 20 exec/s: 0 rss: 73Mb L: 11/11 MS: 1 CrossOver- 00:07:42.443 #11 NEW cov: 12284 ft: 14018 corp: 6/57b lim: 20 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\007"- 00:07:42.443 #12 NEW cov: 12284 ft: 14116 corp: 7/64b lim: 20 exec/s: 0 rss: 74Mb L: 7/19 MS: 1 EraseBytes- 00:07:42.701 #13 NEW cov: 12284 ft: 14163 corp: 8/83b lim: 20 exec/s: 0 rss: 74Mb L: 19/19 MS: 1 CopyPart- 00:07:42.701 #14 NEW cov: 12284 ft: 14188 corp: 9/91b lim: 20 exec/s: 0 rss: 74Mb L: 8/19 MS: 1 ChangeByte- 00:07:42.701 #18 NEW cov: 12284 ft: 14237 corp: 10/95b lim: 20 exec/s: 0 rss: 74Mb L: 4/19 MS: 4 ChangeByte-CopyPart-InsertByte-InsertByte- 00:07:42.701 NEW_FUNC[1/1]: 0x1c54978 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:42.701 #19 NEW cov: 12307 ft: 14313 corp: 11/106b lim: 20 exec/s: 0 rss: 74Mb L: 11/19 MS: 1 ChangeByte- 00:07:42.701 #20 NEW cov: 12307 ft: 14382 corp: 12/110b lim: 20 exec/s: 0 rss: 74Mb L: 4/19 MS: 1 EraseBytes- 00:07:42.959 #21 NEW cov: 12307 ft: 14403 corp: 13/129b lim: 20 exec/s: 21 rss: 74Mb L: 19/19 MS: 1 ChangeByte- 00:07:42.960 NEW_FUNC[1/4]: 0x137b7e8 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3484 00:07:42.960 NEW_FUNC[2/4]: 0x137c368 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3426 00:07:42.960 #22 NEW cov: 12370 ft: 14570 corp: 14/142b lim: 20 exec/s: 22 rss: 74Mb L: 13/19 MS: 1 CMP- DE: "\000\001"- 00:07:42.960 #23 NEW cov: 12370 ft: 14631 corp: 15/146b lim: 20 exec/s: 23 rss: 74Mb L: 4/19 MS: 1 PersAutoDict- DE: "\000\001"- 00:07:42.960 #24 NEW cov: 12370 ft: 14651 corp: 16/157b lim: 20 exec/s: 24 rss: 74Mb L: 11/19 MS: 1 ChangeBinInt- 00:07:43.219 #25 NEW cov: 12370 ft: 14666 corp: 17/168b lim: 20 exec/s: 25 rss: 74Mb L: 11/19 MS: 1 ChangeBit- 00:07:43.219 #26 NEW cov: 12370 ft: 14684 corp: 18/173b lim: 20 exec/s: 26 rss: 74Mb L: 5/19 MS: 1 InsertByte- 00:07:43.219 #27 NEW cov: 12370 ft: 14710 corp: 19/192b lim: 20 exec/s: 27 rss: 74Mb L: 19/19 MS: 1 CMP- DE: "\377\377\377\027"- 00:07:43.219 #28 NEW 
cov: 12370 ft: 14744 corp: 20/203b lim: 20 exec/s: 28 rss: 74Mb L: 11/19 MS: 1 ChangeBit- 00:07:43.478 #29 NEW cov: 12370 ft: 14752 corp: 21/214b lim: 20 exec/s: 29 rss: 74Mb L: 11/19 MS: 1 ChangeBit- 00:07:43.478 #30 NEW cov: 12370 ft: 14791 corp: 22/227b lim: 20 exec/s: 30 rss: 74Mb L: 13/19 MS: 1 EraseBytes- 00:07:43.478 #31 NEW cov: 12370 ft: 14809 corp: 23/236b lim: 20 exec/s: 31 rss: 74Mb L: 9/19 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\007"- 00:07:43.478 #32 NEW cov: 12370 ft: 14850 corp: 24/250b lim: 20 exec/s: 32 rss: 74Mb L: 14/19 MS: 1 CopyPart- 00:07:43.738 #33 NEW cov: 12370 ft: 14880 corp: 25/264b lim: 20 exec/s: 33 rss: 74Mb L: 14/19 MS: 1 ChangeByte- 00:07:43.738 #34 NEW cov: 12370 ft: 14895 corp: 26/273b lim: 20 exec/s: 34 rss: 74Mb L: 9/19 MS: 1 CrossOver- 00:07:43.738 #35 NEW cov: 12370 ft: 14904 corp: 27/278b lim: 20 exec/s: 35 rss: 74Mb L: 5/19 MS: 1 InsertByte- 00:07:43.738 #36 NEW cov: 12370 ft: 14961 corp: 28/298b lim: 20 exec/s: 36 rss: 74Mb L: 20/20 MS: 1 InsertByte- 00:07:43.738 #37 NEW cov: 12370 ft: 14964 corp: 29/317b lim: 20 exec/s: 37 rss: 74Mb L: 19/20 MS: 1 ChangeBinInt- 00:07:43.997 #38 NEW cov: 12370 ft: 14991 corp: 30/327b lim: 20 exec/s: 19 rss: 75Mb L: 10/20 MS: 1 EraseBytes- 00:07:43.997 #38 DONE cov: 12370 ft: 14991 corp: 30/327b lim: 20 exec/s: 19 rss: 75Mb 00:07:43.997 ###### Recommended dictionary. ###### 00:07:43.997 "\377\377\377\377\377\377\377\007" # Uses: 2 00:07:43.997 "\000\001" # Uses: 1 00:07:43.997 "\377\377\377\027" # Uses: 0 00:07:43.997 ###### End of recommended dictionary. ###### 00:07:43.997 Done 38 runs in 2 second(s) 00:07:43.997 15:40:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:07:43.997 15:40:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:43.997 15:40:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:43.997 15:40:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:43.997 15:40:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:07:43.997 15:40:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:43.997 15:40:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:43.997 15:40:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:43.997 15:40:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:07:43.997 15:40:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:43.997 15:40:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:43.997 15:40:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:07:43.997 15:40:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:07:43.997 15:40:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:43.997 15:40:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:07:43.997 15:40:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:43.998 15:40:39 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:43.998 15:40:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:43.998 15:40:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:07:43.998 [2024-12-09 15:40:39.158030] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:07:43.998 [2024-12-09 15:40:39.158103] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid906453 ] 00:07:44.257 [2024-12-09 15:40:39.423942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.257 [2024-12-09 15:40:39.478353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.516 [2024-12-09 15:40:39.537591] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.516 [2024-12-09 15:40:39.553750] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:07:44.516 INFO: Running with entropic power schedule (0xFF, 100). 00:07:44.516 INFO: Seed: 1577737445 00:07:44.516 INFO: Loaded 1 modules (390541 inline 8-bit counters): 390541 [0x2c82a4c, 0x2ce1fd9), 00:07:44.516 INFO: Loaded 1 PC tables (390541 PCs): 390541 [0x2ce1fe0,0x32d78b0), 00:07:44.516 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:44.516 INFO: A corpus is not provided, starting from an empty corpus 00:07:44.516 #2 INITED exec/s: 0 rss: 67Mb 00:07:44.516 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:44.516 This may also happen if the target rejected all inputs we tried so far 00:07:44.516 [2024-12-09 15:40:39.599478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.516 [2024-12-09 15:40:39.599508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.516 [2024-12-09 15:40:39.599562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.516 [2024-12-09 15:40:39.599576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.516 [2024-12-09 15:40:39.599629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.516 [2024-12-09 15:40:39.599642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.775 NEW_FUNC[1/717]: 0x441d78 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:07:44.775 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:44.775 #3 NEW cov: 12158 ft: 12175 corp: 2/25b lim: 35 exec/s: 0 rss: 73Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:07:44.775 [2024-12-09 15:40:39.930051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.775 [2024-12-09 15:40:39.930092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.775 #9 NEW cov: 12289 ft: 13522 corp: 3/37b lim: 35 exec/s: 0 rss: 74Mb L: 12/24 MS: 1 CrossOver- 00:07:44.775 [2024-12-09 15:40:39.990134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.775 [2024-12-09 15:40:39.990164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.034 #15 NEW cov: 12295 ft: 13707 corp: 4/46b lim: 35 exec/s: 0 rss: 74Mb L: 9/24 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\020"- 00:07:45.034 [2024-12-09 15:40:40.030231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.034 [2024-12-09 15:40:40.030260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.034 #16 NEW cov: 12380 ft: 13930 corp: 5/58b lim: 35 exec/s: 0 rss: 74Mb L: 12/24 MS: 1 ShuffleBytes- 00:07:45.034 [2024-12-09 15:40:40.090376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.034 [2024-12-09 15:40:40.090403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.034 #17 NEW cov: 12380 ft: 13998 corp: 6/67b 
lim: 35 exec/s: 0 rss: 74Mb L: 9/24 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\020"- 00:07:45.034 [2024-12-09 15:40:40.150570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.034 [2024-12-09 15:40:40.150595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.034 #18 NEW cov: 12380 ft: 14086 corp: 7/74b lim: 35 exec/s: 0 rss: 74Mb L: 7/24 MS: 1 InsertRepeatedBytes- 00:07:45.034 [2024-12-09 15:40:40.190950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.034 [2024-12-09 15:40:40.190976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.034 [2024-12-09 15:40:40.191048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.034 [2024-12-09 15:40:40.191062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.034 [2024-12-09 15:40:40.191116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.034 [2024-12-09 15:40:40.191129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.034 #19 NEW cov: 12380 ft: 14141 corp: 8/98b lim: 35 exec/s: 0 rss: 74Mb L: 24/24 MS: 1 CopyPart- 00:07:45.034 [2024-12-09 15:40:40.230738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a08 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.034 [2024-12-09 15:40:40.230763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.293 #20 NEW cov: 12380 ft: 14148 corp: 9/105b lim: 35 exec/s: 0 rss: 74Mb L: 7/24 MS: 1 ChangeBit- 00:07:45.293 [2024-12-09 15:40:40.291122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.293 [2024-12-09 15:40:40.291147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.293 [2024-12-09 15:40:40.291200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.293 [2024-12-09 15:40:40.291214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.293 #21 NEW cov: 12380 ft: 14409 corp: 10/121b lim: 35 exec/s: 0 rss: 74Mb L: 16/24 MS: 1 CrossOver- 00:07:45.294 [2024-12-09 15:40:40.331322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.294 [2024-12-09 15:40:40.331347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.294 [2024-12-09 15:40:40.331398] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.294 [2024-12-09 15:40:40.331413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.294 [2024-12-09 15:40:40.331465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.294 [2024-12-09 15:40:40.331479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.294 #22 NEW cov: 12380 ft: 14516 corp: 11/146b lim: 35 exec/s: 0 rss: 74Mb L: 25/25 MS: 1 InsertByte- 00:07:45.294 [2024-12-09 15:40:40.371176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.294 [2024-12-09 15:40:40.371201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.294 #23 NEW cov: 12380 ft: 14592 corp: 12/158b lim: 35 exec/s: 0 rss: 74Mb L: 12/25 MS: 1 CrossOver- 00:07:45.294 [2024-12-09 15:40:40.431620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a08 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.294 [2024-12-09 15:40:40.431645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.294 [2024-12-09 15:40:40.431698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.294 [2024-12-09 15:40:40.431711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.294 [2024-12-09 15:40:40.431763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.294 [2024-12-09 15:40:40.431777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.294 #24 NEW cov: 12380 ft: 14626 corp: 13/185b lim: 35 exec/s: 0 rss: 74Mb L: 27/27 MS: 1 CrossOver- 00:07:45.294 [2024-12-09 15:40:40.491811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.294 [2024-12-09 15:40:40.491836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.294 [2024-12-09 15:40:40.491910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.294 [2024-12-09 15:40:40.491938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.294 [2024-12-09 15:40:40.491990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.294 [2024-12-09 15:40:40.492003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.294 NEW_FUNC[1/1]: 0x1c54978 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:45.294 #25 NEW cov: 12403 ft: 14665 corp: 14/209b lim: 35 exec/s: 0 rss: 74Mb L: 24/27 MS: 1 ChangeBit- 00:07:45.553 [2024-12-09 15:40:40.531854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.553 [2024-12-09 15:40:40.531879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.553 [2024-12-09 15:40:40.531934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.553 [2024-12-09 15:40:40.531949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.553 [2024-12-09 15:40:40.532004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.553 [2024-12-09 15:40:40.532018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.553 #26 NEW cov: 12403 ft: 14678 corp: 15/233b lim: 35 exec/s: 0 rss: 74Mb L: 24/27 MS: 1 ShuffleBytes- 00:07:45.553 [2024-12-09 15:40:40.591775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff0a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.553 [2024-12-09 15:40:40.591800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.553 #27 NEW cov: 12403 ft: 14699 corp: 16/245b lim: 35 exec/s: 27 rss: 74Mb L: 12/27 MS: 1 InsertRepeatedBytes- 00:07:45.553 [2024-12-09 15:40:40.631850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:0cff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.553 [2024-12-09 15:40:40.631875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.553 #28 NEW cov: 12403 ft: 14747 corp: 17/257b lim: 35 exec/s: 28 rss: 74Mb L: 12/27 MS: 1 ChangeBinInt- 00:07:45.553 [2024-12-09 15:40:40.671980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ff28ffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.553 [2024-12-09 15:40:40.672005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.553 #29 NEW cov: 12403 ft: 14774 corp: 18/270b lim: 35 exec/s: 29 rss: 74Mb L: 13/27 MS: 1 InsertByte- 00:07:45.553 [2024-12-09 15:40:40.712402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a08 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.553 [2024-12-09 15:40:40.712427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.553 [2024-12-09 15:40:40.712481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fff7ffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:45.553 [2024-12-09 15:40:40.712495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.553 [2024-12-09 15:40:40.712548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.553 [2024-12-09 15:40:40.712565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.553 #30 NEW cov: 12403 ft: 14790 corp: 19/297b lim: 35 exec/s: 30 rss: 74Mb L: 27/27 MS: 1 ChangeBinInt- 00:07:45.553 [2024-12-09 15:40:40.772268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff0a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.553 [2024-12-09 15:40:40.772293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.813 #31 NEW cov: 12403 ft: 14840 corp: 20/309b lim: 35 exec/s: 31 rss: 74Mb L: 12/27 MS: 1 ChangeByte- 00:07:45.813 [2024-12-09 15:40:40.832437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a7f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.813 [2024-12-09 15:40:40.832462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.813 #32 NEW cov: 12403 ft: 14852 corp: 21/318b lim: 35 exec/s: 32 rss: 74Mb L: 9/27 MS: 1 CMP- DE: "\177\000\000\000"- 00:07:45.813 [2024-12-09 15:40:40.872711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.813 [2024-12-09 15:40:40.872737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.813 [2024-12-09 15:40:40.872791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.813 [2024-12-09 15:40:40.872805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.813 #33 NEW cov: 12403 ft: 14862 corp: 22/337b lim: 35 exec/s: 33 rss: 74Mb L: 19/27 MS: 1 CrossOver- 00:07:45.813 [2024-12-09 15:40:40.913151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.813 [2024-12-09 15:40:40.913178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.813 [2024-12-09 15:40:40.913233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.813 [2024-12-09 15:40:40.913247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.813 [2024-12-09 15:40:40.913300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.813 [2024-12-09 15:40:40.913313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.813 [2024-12-09 15:40:40.913367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.813 [2024-12-09 15:40:40.913381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.813 #34 NEW cov: 12403 ft: 15219 corp: 23/370b lim: 35 exec/s: 34 rss: 75Mb L: 33/33 MS: 1 CopyPart- 00:07:45.813 [2024-12-09 15:40:40.973311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.813 [2024-12-09 15:40:40.973338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.813 [2024-12-09 15:40:40.973391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:000a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.813 [2024-12-09 15:40:40.973406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.813 [2024-12-09 15:40:40.973463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.813 [2024-12-09 15:40:40.973477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.813 [2024-12-09 15:40:40.973533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:10ffff00 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.813 [2024-12-09 15:40:40.973548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.813 #35 NEW cov: 12403 ft: 15256 corp: 24/398b lim: 35 exec/s: 35 rss: 75Mb L: 28/33 MS: 1 CrossOver- 00:07:45.813 [2024-12-09 15:40:41.033335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.813 [2024-12-09 15:40:41.033361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.813 [2024-12-09 15:40:41.033432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.813 [2024-12-09 15:40:41.033446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.813 [2024-12-09 15:40:41.033499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.813 [2024-12-09 15:40:41.033512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.072 #36 NEW cov: 12403 ft: 15269 corp: 25/423b lim: 35 exec/s: 36 rss: 75Mb L: 25/33 MS: 1 ChangeByte- 00:07:46.072 [2024-12-09 15:40:41.093171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a26 cdw11:007f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.072 
[2024-12-09 15:40:41.093198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.072 #40 NEW cov: 12403 ft: 15312 corp: 26/433b lim: 35 exec/s: 40 rss: 75Mb L: 10/33 MS: 4 EraseBytes-InsertByte-ChangeByte-PersAutoDict- DE: "\177\000\000\000"- 00:07:46.072 [2024-12-09 15:40:41.133419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:7f00ffff cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.072 [2024-12-09 15:40:41.133443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.072 [2024-12-09 15:40:41.133514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0a08ffff cdw11:00280000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.072 [2024-12-09 15:40:41.133528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.072 #41 NEW cov: 12403 ft: 15329 corp: 27/449b lim: 35 exec/s: 41 rss: 75Mb L: 16/33 MS: 1 PersAutoDict- DE: "\177\000\000\000"- 00:07:46.072 [2024-12-09 15:40:41.193934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ecececec cdw11:ecec0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.072 [2024-12-09 15:40:41.193960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.072 [2024-12-09 15:40:41.194012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ecececec cdw11:ecec0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.072 [2024-12-09 15:40:41.194026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.072 [2024-12-09 15:40:41.194082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ecececec cdw11:ecec0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.072 [2024-12-09 15:40:41.194098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.072 [2024-12-09 15:40:41.194150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ecececec cdw11:ecec0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.072 [2024-12-09 15:40:41.194164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.072 #45 NEW cov: 12403 ft: 15394 corp: 28/482b lim: 35 exec/s: 45 rss: 75Mb L: 33/33 MS: 4 ChangeBinInt-ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:07:46.072 [2024-12-09 15:40:41.233574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0a00ffff cdw11:00f50000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.072 [2024-12-09 15:40:41.233600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.072 #46 NEW cov: 12403 ft: 15435 corp: 29/494b lim: 35 exec/s: 46 rss: 75Mb L: 12/33 MS: 1 ChangeBinInt- 00:07:46.072 [2024-12-09 15:40:41.273887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a26 cdw11:007f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.072 [2024-12-09 
15:40:41.273911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.072 [2024-12-09 15:40:41.273966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.072 [2024-12-09 15:40:41.273981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.332 #47 NEW cov: 12403 ft: 15470 corp: 30/512b lim: 35 exec/s: 47 rss: 75Mb L: 18/33 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\020"- 00:07:46.332 [2024-12-09 15:40:41.333895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.332 [2024-12-09 15:40:41.333931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.332 #48 NEW cov: 12403 ft: 15484 corp: 31/524b lim: 35 exec/s: 48 rss: 75Mb L: 12/33 MS: 1 PersAutoDict- DE: "\177\000\000\000"- 00:07:46.332 [2024-12-09 15:40:41.374254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.332 [2024-12-09 15:40:41.374279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.332 [2024-12-09 15:40:41.374330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.332 [2024-12-09 15:40:41.374344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.332 [2024-12-09 15:40:41.374394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.332 [2024-12-09 15:40:41.374408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.332 #49 NEW cov: 12403 ft: 15498 corp: 32/548b lim: 35 exec/s: 49 rss: 75Mb L: 24/33 MS: 1 InsertRepeatedBytes- 00:07:46.332 [2024-12-09 15:40:41.414103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a7f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.332 [2024-12-09 15:40:41.414127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.332 #50 NEW cov: 12403 ft: 15542 corp: 33/561b lim: 35 exec/s: 50 rss: 75Mb L: 13/33 MS: 1 CopyPart- 00:07:46.332 [2024-12-09 15:40:41.474433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.332 [2024-12-09 15:40:41.474470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.332 [2024-12-09 15:40:41.474523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.332 [2024-12-09 15:40:41.474536] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.332 #51 NEW cov: 12403 ft: 15585 corp: 34/580b lim: 35 exec/s: 51 rss: 75Mb L: 19/33 MS: 1 ShuffleBytes- 00:07:46.332 [2024-12-09 15:40:41.534439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.332 [2024-12-09 15:40:41.534463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.332 #52 NEW cov: 12403 ft: 15598 corp: 35/592b lim: 35 exec/s: 52 rss: 75Mb L: 12/33 MS: 1 ChangeByte- 00:07:46.592 [2024-12-09 15:40:41.574687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.592 [2024-12-09 15:40:41.574712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.592 [2024-12-09 15:40:41.574764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.592 [2024-12-09 15:40:41.574778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.592 #53 NEW cov: 12403 ft: 15612 corp: 36/608b lim: 35 exec/s: 26 rss: 75Mb L: 16/33 MS: 1 CopyPart- 00:07:46.592 #53 DONE cov: 12403 ft: 15612 corp: 36/608b lim: 35 exec/s: 26 rss: 75Mb 00:07:46.592 ###### Recommended dictionary. ###### 00:07:46.592 "\001\000\000\000\000\000\000\020" # Uses: 2 00:07:46.592 "\177\000\000\000" # Uses: 3 00:07:46.592 ###### End of recommended dictionary. 
###### 00:07:46.592 Done 53 runs in 2 second(s) 00:07:46.592 15:40:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:07:46.592 15:40:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:46.592 15:40:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:46.592 15:40:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:46.592 15:40:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:07:46.592 15:40:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:46.592 15:40:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:46.592 15:40:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:46.592 15:40:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:07:46.592 15:40:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:46.592 15:40:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:46.592 15:40:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:07:46.592 15:40:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:07:46.592 15:40:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:46.592 15:40:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:07:46.592 15:40:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:46.592 15:40:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:46.592 15:40:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:46.592 15:40:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:07:46.592 [2024-12-09 15:40:41.751452] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:07:46.592 [2024-12-09 15:40:41.751528] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid906806 ] 00:07:46.851 [2024-12-09 15:40:42.017637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.851 [2024-12-09 15:40:42.070933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.111 [2024-12-09 15:40:42.130290] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.111 [2024-12-09 15:40:42.146439] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:07:47.111 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:47.111 INFO: Seed: 4169706498 00:07:47.111 INFO: Loaded 1 modules (390541 inline 8-bit counters): 390541 [0x2c82a4c, 0x2ce1fd9), 00:07:47.111 INFO: Loaded 1 PC tables (390541 PCs): 390541 [0x2ce1fe0,0x32d78b0), 00:07:47.111 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:47.111 INFO: A corpus is not provided, starting from an empty corpus 00:07:47.111 #2 INITED exec/s: 0 rss: 66Mb 00:07:47.111 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:47.111 This may also happen if the target rejected all inputs we tried so far 00:07:47.111 [2024-12-09 15:40:42.191437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.111 [2024-12-09 15:40:42.191473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.111 [2024-12-09 15:40:42.191523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.111 [2024-12-09 15:40:42.191540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.374 NEW_FUNC[1/717]: 0x443f18 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:07:47.374 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:47.374 #9 NEW cov: 12187 ft: 12185 corp: 2/19b lim: 45 exec/s: 0 rss: 73Mb L: 18/18 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:47.374 [2024-12-09 15:40:42.552421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.374 [2024-12-09 15:40:42.552465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.374 [2024-12-09 15:40:42.552515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.374 [2024-12-09 15:40:42.552532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.374 [2024-12-09 15:40:42.552562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.374 [2024-12-09 15:40:42.552579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.711 #10 NEW cov: 12300 ft: 12991 corp: 3/47b lim: 45 exec/s: 0 rss: 73Mb L: 28/28 MS: 1 CopyPart- 00:07:47.711 [2024-12-09 15:40:42.652505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.711 [2024-12-09 15:40:42.652543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.711 [2024-12-09 15:40:42.652576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) 
qid:0 cid:5 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.711 [2024-12-09 15:40:42.652598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.711 #11 NEW cov: 12306 ft: 13428 corp: 4/71b lim: 45 exec/s: 0 rss: 73Mb L: 24/28 MS: 1 CopyPart- 00:07:47.711 [2024-12-09 15:40:42.712708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.711 [2024-12-09 15:40:42.712739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.711 [2024-12-09 15:40:42.712788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffff1e1e cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.711 [2024-12-09 15:40:42.712805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.711 [2024-12-09 15:40:42.712835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.711 [2024-12-09 15:40:42.712858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.711 [2024-12-09 15:40:42.712888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.711 [2024-12-09 15:40:42.712904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.711 #12 NEW cov: 12391 ft: 14078 corp: 5/110b lim: 45 exec/s: 0 rss: 73Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:07:47.711 [2024-12-09 15:40:42.802981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.711 [2024-12-09 15:40:42.803012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.711 [2024-12-09 15:40:42.803046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.711 [2024-12-09 15:40:42.803070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.711 [2024-12-09 15:40:42.803100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:75751e1e cdw11:75750003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.711 [2024-12-09 15:40:42.803133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.711 [2024-12-09 15:40:42.803163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:75757575 cdw11:751e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.711 [2024-12-09 15:40:42.803179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.711 #13 NEW cov: 12391 ft: 14148 corp: 6/146b lim: 45 exec/s: 0 rss: 73Mb L: 36/39 MS: 1 InsertRepeatedBytes- 
00:07:47.711 [2024-12-09 15:40:42.863072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.711 [2024-12-09 15:40:42.863103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.711 [2024-12-09 15:40:42.863141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.711 [2024-12-09 15:40:42.863161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.711 [2024-12-09 15:40:42.863190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.711 [2024-12-09 15:40:42.863207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.711 #14 NEW cov: 12391 ft: 14251 corp: 7/180b lim: 45 exec/s: 0 rss: 73Mb L: 34/39 MS: 1 InsertRepeatedBytes- 00:07:48.050 [2024-12-09 15:40:42.923177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.050 [2024-12-09 15:40:42.923209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.050 [2024-12-09 15:40:42.923244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.050 [2024-12-09 15:40:42.923268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.050 #15 NEW cov: 12391 ft: 14355 corp: 8/199b lim: 45 exec/s: 0 rss: 73Mb L: 19/39 MS: 1 InsertByte- 00:07:48.050 [2024-12-09 15:40:42.983426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.050 [2024-12-09 15:40:42.983457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.050 [2024-12-09 15:40:42.983491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.050 [2024-12-09 15:40:42.983515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.050 [2024-12-09 15:40:42.983545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.050 [2024-12-09 15:40:42.983560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.050 [2024-12-09 15:40:42.983590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.050 [2024-12-09 15:40:42.983606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 
dnr:0 00:07:48.050 #16 NEW cov: 12391 ft: 14391 corp: 9/238b lim: 45 exec/s: 0 rss: 73Mb L: 39/39 MS: 1 CopyPart- 00:07:48.050 [2024-12-09 15:40:43.073621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:1e1e1e1e cdw11:2a1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.050 [2024-12-09 15:40:43.073654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.050 [2024-12-09 15:40:43.073688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.050 [2024-12-09 15:40:43.073712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.050 [2024-12-09 15:40:43.073742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.050 [2024-12-09 15:40:43.073759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.050 NEW_FUNC[1/1]: 0x1c54978 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:48.050 #17 NEW cov: 12408 ft: 14458 corp: 10/266b lim: 45 exec/s: 0 rss: 74Mb L: 28/39 MS: 1 ChangeByte- 00:07:48.050 [2024-12-09 15:40:43.163872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.050 [2024-12-09 15:40:43.163905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.050 [2024-12-09 15:40:43.163939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:1e1e3e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.050 [2024-12-09 15:40:43.163964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.050 [2024-12-09 15:40:43.163994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.050 [2024-12-09 15:40:43.164011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.050 #18 NEW cov: 12408 ft: 14504 corp: 11/294b lim: 45 exec/s: 18 rss: 74Mb L: 28/39 MS: 1 ChangeBit- 00:07:48.050 [2024-12-09 15:40:43.224086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.050 [2024-12-09 15:40:43.224118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.050 [2024-12-09 15:40:43.224152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.050 [2024-12-09 15:40:43.224177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.050 [2024-12-09 15:40:43.224208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 
cid:6 nsid:0 cdw10:1e3e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.050 [2024-12-09 15:40:43.224224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.050 [2024-12-09 15:40:43.224254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.050 [2024-12-09 15:40:43.224270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.381 #19 NEW cov: 12408 ft: 14531 corp: 12/334b lim: 45 exec/s: 19 rss: 74Mb L: 40/40 MS: 1 CopyPart- 00:07:48.381 [2024-12-09 15:40:43.324254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.381 [2024-12-09 15:40:43.324286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.381 [2024-12-09 15:40:43.324320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.381 [2024-12-09 15:40:43.324345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.381 [2024-12-09 15:40:43.324374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.381 [2024-12-09 15:40:43.324406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.381 #20 NEW cov: 12408 ft: 14586 corp: 13/362b lim: 45 exec/s: 20 rss: 74Mb L: 28/40 MS: 1 ChangeBit- 00:07:48.381 [2024-12-09 15:40:43.374382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.381 [2024-12-09 15:40:43.374421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.381 [2024-12-09 15:40:43.374455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.381 [2024-12-09 15:40:43.374480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.381 [2024-12-09 15:40:43.374510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.381 [2024-12-09 15:40:43.374526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.381 #21 NEW cov: 12408 ft: 14603 corp: 14/396b lim: 45 exec/s: 21 rss: 74Mb L: 34/40 MS: 1 ChangeBit- 00:07:48.381 [2024-12-09 15:40:43.434598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.381 [2024-12-09 15:40:43.434629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.381 
[2024-12-09 15:40:43.434663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.381 [2024-12-09 15:40:43.434688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.381 [2024-12-09 15:40:43.434718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:1e1e1e21 cdw11:3e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.381 [2024-12-09 15:40:43.434734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.381 [2024-12-09 15:40:43.434763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.381 [2024-12-09 15:40:43.434779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.381 #22 NEW cov: 12408 ft: 14629 corp: 15/437b lim: 45 exec/s: 22 rss: 74Mb L: 41/41 MS: 1 InsertByte- 00:07:48.381 [2024-12-09 15:40:43.524836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.381 [2024-12-09 15:40:43.524874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.382 [2024-12-09 15:40:43.524909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.382 [2024-12-09 15:40:43.524925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.382 [2024-12-09 15:40:43.524955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:211e1e1e cdw11:1e3e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.382 [2024-12-09 15:40:43.524971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.382 [2024-12-09 15:40:43.525000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.382 [2024-12-09 15:40:43.525016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.642 #23 NEW cov: 12408 ft: 14649 corp: 16/479b lim: 45 exec/s: 23 rss: 74Mb L: 42/42 MS: 1 InsertByte- 00:07:48.642 [2024-12-09 15:40:43.615117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.642 [2024-12-09 15:40:43.615155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.642 [2024-12-09 15:40:43.615192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:1e1e1e1e cdw11:1e000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.642 [2024-12-09 15:40:43.615215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.642 
[2024-12-09 15:40:43.615247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00290000 cdw11:3e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.642 [2024-12-09 15:40:43.615265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.642 [2024-12-09 15:40:43.615296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.642 [2024-12-09 15:40:43.615313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.642 #24 NEW cov: 12408 ft: 14675 corp: 17/520b lim: 45 exec/s: 24 rss: 74Mb L: 41/42 MS: 1 ChangeBinInt- 00:07:48.642 [2024-12-09 15:40:43.675256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.642 [2024-12-09 15:40:43.675288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.642 [2024-12-09 15:40:43.675322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.642 [2024-12-09 15:40:43.675338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.642 [2024-12-09 15:40:43.675368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:75751e1e cdw11:75750003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.642 [2024-12-09 15:40:43.675384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.642 [2024-12-09 15:40:43.675413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:75757575 cdw11:751e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.642 [2024-12-09 15:40:43.675429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.642 #25 NEW cov: 12408 ft: 14693 corp: 18/556b lim: 45 exec/s: 25 rss: 74Mb L: 36/42 MS: 1 ChangeByte- 00:07:48.642 [2024-12-09 15:40:43.765241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.642 [2024-12-09 15:40:43.765271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.642 #26 NEW cov: 12408 ft: 15438 corp: 19/569b lim: 45 exec/s: 26 rss: 74Mb L: 13/42 MS: 1 EraseBytes- 00:07:48.642 [2024-12-09 15:40:43.865765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.642 [2024-12-09 15:40:43.865797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.642 [2024-12-09 15:40:43.865831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.642 [2024-12-09 15:40:43.865864] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.642 [2024-12-09 15:40:43.865896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.642 [2024-12-09 15:40:43.865917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.642 [2024-12-09 15:40:43.865947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.642 [2024-12-09 15:40:43.865963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.902 #27 NEW cov: 12408 ft: 15475 corp: 20/608b lim: 45 exec/s: 27 rss: 74Mb L: 39/42 MS: 1 CopyPart- 00:07:48.902 [2024-12-09 15:40:43.955866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.902 [2024-12-09 15:40:43.955896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.902 [2024-12-09 15:40:43.955944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.902 [2024-12-09 15:40:43.955961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.902 [2024-12-09 15:40:43.955990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.902 [2024-12-09 15:40:43.956006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.902 #28 NEW cov: 12408 ft: 15491 corp: 21/636b lim: 45 exec/s: 28 rss: 74Mb L: 28/42 MS: 1 ShuffleBytes- 00:07:48.902 [2024-12-09 15:40:44.006048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.902 [2024-12-09 15:40:44.006078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.902 [2024-12-09 15:40:44.006112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:1e1e3e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.902 [2024-12-09 15:40:44.006128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.902 [2024-12-09 15:40:44.006157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.902 [2024-12-09 15:40:44.006173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.902 #29 NEW cov: 12408 ft: 15520 corp: 22/664b lim: 45 exec/s: 29 rss: 74Mb L: 28/42 MS: 1 CopyPart- 00:07:48.902 [2024-12-09 15:40:44.066140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 
cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.902 [2024-12-09 15:40:44.066170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.902 [2024-12-09 15:40:44.066203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.902 [2024-12-09 15:40:44.066219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.902 #30 NEW cov: 12415 ft: 15529 corp: 23/690b lim: 45 exec/s: 30 rss: 74Mb L: 26/42 MS: 1 EraseBytes- 00:07:48.902 [2024-12-09 15:40:44.116265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:1ee21e1e cdw11:e21e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.902 [2024-12-09 15:40:44.116296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.902 [2024-12-09 15:40:44.116337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:1e1e1e1e cdw11:1e1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.902 [2024-12-09 15:40:44.116356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.162 #31 NEW cov: 12415 ft: 15564 corp: 24/716b lim: 45 exec/s: 15 rss: 74Mb L: 26/42 MS: 1 ChangeBinInt- 00:07:49.162 #31 DONE cov: 12415 ft: 15564 corp: 24/716b lim: 45 exec/s: 15 rss: 74Mb 00:07:49.162 Done 31 runs in 2 second(s) 00:07:49.162 15:40:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:07:49.162 15:40:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:49.162 15:40:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:49.162 15:40:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:49.162 15:40:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:07:49.162 15:40:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:49.162 15:40:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:49.162 15:40:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:49.162 15:40:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:07:49.162 15:40:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:49.162 15:40:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:49.162 15:40:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:07:49.162 15:40:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:07:49.162 15:40:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:49.162 15:40:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:07:49.162 15:40:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:49.162 15:40:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:49.162 15:40:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:49.162 15:40:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:07:49.162 [2024-12-09 15:40:44.347100] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:07:49.162 [2024-12-09 15:40:44.347185] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid907176 ] 00:07:49.421 [2024-12-09 15:40:44.613684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.681 [2024-12-09 15:40:44.667903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.681 [2024-12-09 15:40:44.727170] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.681 [2024-12-09 15:40:44.743314] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:07:49.681 INFO: Running with entropic power schedule (0xFF, 100). 00:07:49.681 INFO: Seed: 2470758874 00:07:49.681 INFO: Loaded 1 modules (390541 inline 8-bit counters): 390541 [0x2c82a4c, 0x2ce1fd9), 00:07:49.681 INFO: Loaded 1 PC tables (390541 PCs): 390541 [0x2ce1fe0,0x32d78b0), 00:07:49.681 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:49.681 INFO: A corpus is not provided, starting from an empty corpus 00:07:49.681 #2 INITED exec/s: 0 rss: 66Mb 00:07:49.681 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:49.681 This may also happen if the target rejected all inputs we tried so far 00:07:49.681 [2024-12-09 15:40:44.792280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:49.681 [2024-12-09 15:40:44.792309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.941 NEW_FUNC[1/715]: 0x446728 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:07:49.941 NEW_FUNC[2/715]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:49.941 #3 NEW cov: 12103 ft: 12094 corp: 2/3b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 CrossOver- 00:07:49.941 [2024-12-09 15:40:45.123342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:49.941 [2024-12-09 15:40:45.123378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.941 [2024-12-09 15:40:45.123431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00005858 cdw11:00000000 00:07:49.941 [2024-12-09 15:40:45.123445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.941 #4 NEW cov: 12217 ft: 12834 corp: 3/8b lim: 10 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:50.200 [2024-12-09 15:40:45.183293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a58 cdw11:00000000 00:07:50.200 [2024-12-09 15:40:45.183323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.200 #5 NEW cov: 12223 ft: 13065 corp: 4/10b lim: 10 exec/s: 0 rss: 74Mb L: 2/5 MS: 1 CrossOver- 00:07:50.200 [2024-12-09 15:40:45.223726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000058a7 cdw11:00000000 00:07:50.200 [2024-12-09 15:40:45.223752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.200 [2024-12-09 15:40:45.223804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000a7a7 cdw11:00000000 00:07:50.200 [2024-12-09 15:40:45.223818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.200 [2024-12-09 15:40:45.223873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000a7a7 cdw11:00000000 00:07:50.200 [2024-12-09 15:40:45.223888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.200 [2024-12-09 15:40:45.223940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000a7a7 cdw11:00000000 00:07:50.200 [2024-12-09 15:40:45.223954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.200 #7 NEW cov: 12308 ft: 13683 corp: 5/19b lim: 10 exec/s: 0 rss: 74Mb L: 9/9 MS: 2 EraseBytes-InsertRepeatedBytes- 
00:07:50.200 [2024-12-09 15:40:45.283669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000be0a cdw11:00000000 00:07:50.200 [2024-12-09 15:40:45.283694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.200 [2024-12-09 15:40:45.283749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00005858 cdw11:00000000 00:07:50.200 [2024-12-09 15:40:45.283763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.200 #8 NEW cov: 12308 ft: 13773 corp: 6/24b lim: 10 exec/s: 0 rss: 74Mb L: 5/9 MS: 1 ChangeByte- 00:07:50.200 [2024-12-09 15:40:45.343855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000be0a cdw11:00000000 00:07:50.200 [2024-12-09 15:40:45.343884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.200 [2024-12-09 15:40:45.343935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00005858 cdw11:00000000 00:07:50.200 [2024-12-09 15:40:45.343949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.200 #9 NEW cov: 12308 ft: 13854 corp: 7/29b lim: 10 exec/s: 0 rss: 74Mb L: 5/9 MS: 1 ShuffleBytes- 00:07:50.200 [2024-12-09 15:40:45.404034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:50.200 [2024-12-09 15:40:45.404058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.200 [2024-12-09 15:40:45.404124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002b58 cdw11:00000000 00:07:50.200 [2024-12-09 15:40:45.404138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.458 #10 NEW cov: 12308 ft: 13922 corp: 8/34b lim: 10 exec/s: 0 rss: 74Mb L: 5/9 MS: 1 ChangeByte- 00:07:50.459 [2024-12-09 15:40:45.443939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000909 cdw11:00000000 00:07:50.459 [2024-12-09 15:40:45.443963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.459 #15 NEW cov: 12308 ft: 14022 corp: 9/36b lim: 10 exec/s: 0 rss: 74Mb L: 2/9 MS: 5 ChangeBit-ChangeBit-CopyPart-CopyPart-CopyPart- 00:07:50.459 [2024-12-09 15:40:45.484068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a05 cdw11:00000000 00:07:50.459 [2024-12-09 15:40:45.484093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.459 #16 NEW cov: 12308 ft: 14053 corp: 10/38b lim: 10 exec/s: 0 rss: 74Mb L: 2/9 MS: 1 ChangeByte- 00:07:50.459 [2024-12-09 15:40:45.524150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000be0a cdw11:00000000 00:07:50.459 [2024-12-09 15:40:45.524174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:07:50.459 #17 NEW cov: 12308 ft: 14123 corp: 11/41b lim: 10 exec/s: 0 rss: 74Mb L: 3/9 MS: 1 EraseBytes- 00:07:50.459 [2024-12-09 15:40:45.584453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:50.459 [2024-12-09 15:40:45.584478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.459 [2024-12-09 15:40:45.584531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002b58 cdw11:00000000 00:07:50.459 [2024-12-09 15:40:45.584545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.459 #18 NEW cov: 12308 ft: 14154 corp: 12/46b lim: 10 exec/s: 0 rss: 74Mb L: 5/9 MS: 1 ShuffleBytes- 00:07:50.459 [2024-12-09 15:40:45.644489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005809 cdw11:00000000 00:07:50.459 [2024-12-09 15:40:45.644514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.717 NEW_FUNC[1/1]: 0x1c54978 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:50.717 #19 NEW cov: 12331 ft: 14195 corp: 13/48b lim: 10 exec/s: 0 rss: 74Mb L: 2/9 MS: 1 CrossOver- 00:07:50.717 [2024-12-09 15:40:45.704649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005809 cdw11:00000000 00:07:50.717 [2024-12-09 15:40:45.704674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.717 #20 NEW cov: 12331 ft: 14264 corp: 14/51b lim: 10 exec/s: 0 rss: 74Mb L: 3/9 MS: 1 CrossOver- 00:07:50.717 [2024-12-09 15:40:45.764828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005858 cdw11:00000000 00:07:50.717 [2024-12-09 15:40:45.764859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.717 #21 NEW cov: 12331 ft: 14267 corp: 15/53b lim: 10 exec/s: 21 rss: 74Mb L: 2/9 MS: 1 CrossOver- 00:07:50.717 [2024-12-09 15:40:45.804935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001909 cdw11:00000000 00:07:50.717 [2024-12-09 15:40:45.804960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.717 #22 NEW cov: 12331 ft: 14295 corp: 16/55b lim: 10 exec/s: 22 rss: 74Mb L: 2/9 MS: 1 ChangeBit- 00:07:50.717 [2024-12-09 15:40:45.845373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000be0a cdw11:00000000 00:07:50.717 [2024-12-09 15:40:45.845398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.717 [2024-12-09 15:40:45.845450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a58 cdw11:00000000 00:07:50.717 [2024-12-09 15:40:45.845464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.717 [2024-12-09 15:40:45.845514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00005858 cdw11:00000000 00:07:50.717 [2024-12-09 15:40:45.845528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.717 [2024-12-09 15:40:45.845578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000a58 cdw11:00000000 00:07:50.717 [2024-12-09 15:40:45.845591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.717 #23 NEW cov: 12331 ft: 14308 corp: 17/63b lim: 10 exec/s: 23 rss: 74Mb L: 8/9 MS: 1 CrossOver- 00:07:50.717 [2024-12-09 15:40:45.905536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000be0a cdw11:00000000 00:07:50.717 [2024-12-09 15:40:45.905560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.717 [2024-12-09 15:40:45.905612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a58 cdw11:00000000 00:07:50.717 [2024-12-09 15:40:45.905626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.717 [2024-12-09 15:40:45.905676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00005858 cdw11:00000000 00:07:50.717 [2024-12-09 15:40:45.905690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.717 [2024-12-09 15:40:45.905739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00005858 cdw11:00000000 00:07:50.717 [2024-12-09 15:40:45.905752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.976 #24 NEW cov: 12331 ft: 14365 corp: 18/71b lim: 10 exec/s: 24 rss: 74Mb L: 8/9 MS: 1 CopyPart- 00:07:50.976 [2024-12-09 15:40:45.965687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000dca7 cdw11:00000000 00:07:50.976 [2024-12-09 15:40:45.965713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.976 [2024-12-09 15:40:45.965765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000a7a7 cdw11:00000000 00:07:50.976 [2024-12-09 15:40:45.965782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.976 [2024-12-09 15:40:45.965832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000a7a7 cdw11:00000000 00:07:50.976 [2024-12-09 15:40:45.965854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.976 [2024-12-09 15:40:45.965922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000a7a7 cdw11:00000000 00:07:50.976 [2024-12-09 15:40:45.965936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.976 #25 NEW cov: 12331 ft: 14398 corp: 19/80b lim: 10 exec/s: 25 rss: 75Mb L: 9/9 MS: 1 
ChangeByte- 00:07:50.976 [2024-12-09 15:40:46.025881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000be02 cdw11:00000000 00:07:50.976 [2024-12-09 15:40:46.025905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.976 [2024-12-09 15:40:46.025958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a58 cdw11:00000000 00:07:50.976 [2024-12-09 15:40:46.025972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.976 [2024-12-09 15:40:46.026023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00005858 cdw11:00000000 00:07:50.976 [2024-12-09 15:40:46.026036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.976 [2024-12-09 15:40:46.026088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00005858 cdw11:00000000 00:07:50.976 [2024-12-09 15:40:46.026102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.976 #26 NEW cov: 12331 ft: 14423 corp: 20/88b lim: 10 exec/s: 26 rss: 75Mb L: 8/9 MS: 1 ChangeBit- 00:07:50.976 [2024-12-09 15:40:46.085686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005a58 cdw11:00000000 00:07:50.976 [2024-12-09 15:40:46.085710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.976 #27 NEW cov: 12331 ft: 14445 corp: 21/90b lim: 10 exec/s: 27 rss: 75Mb L: 2/9 MS: 1 ChangeBit- 00:07:50.976 [2024-12-09 15:40:46.145869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a05 cdw11:00000000 00:07:50.976 [2024-12-09 15:40:46.145893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.976 #28 NEW cov: 12331 ft: 14457 corp: 22/93b lim: 10 exec/s: 28 rss: 75Mb L: 3/9 MS: 1 InsertByte- 00:07:51.235 [2024-12-09 15:40:46.206187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000be05 cdw11:00000000 00:07:51.235 [2024-12-09 15:40:46.206212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.235 [2024-12-09 15:40:46.206265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:51.235 [2024-12-09 15:40:46.206278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.235 #29 NEW cov: 12331 ft: 14463 corp: 23/98b lim: 10 exec/s: 29 rss: 75Mb L: 5/9 MS: 1 ChangeBinInt- 00:07:51.235 [2024-12-09 15:40:46.246130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009e0a cdw11:00000000 00:07:51.235 [2024-12-09 15:40:46.246156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.235 #30 NEW cov: 12331 ft: 14484 corp: 24/101b lim: 10 exec/s: 30 rss: 75Mb L: 3/9 MS: 1 ChangeBit- 
00:07:51.235 [2024-12-09 15:40:46.286242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a05 cdw11:00000000 00:07:51.235 [2024-12-09 15:40:46.286268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.235 #31 NEW cov: 12331 ft: 14517 corp: 25/104b lim: 10 exec/s: 31 rss: 75Mb L: 3/9 MS: 1 ShuffleBytes- 00:07:51.235 [2024-12-09 15:40:46.346567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000be40 cdw11:00000000 00:07:51.235 [2024-12-09 15:40:46.346591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.235 [2024-12-09 15:40:46.346644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 00:07:51.235 [2024-12-09 15:40:46.346658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.235 #32 NEW cov: 12331 ft: 14525 corp: 26/109b lim: 10 exec/s: 32 rss: 75Mb L: 5/9 MS: 1 CMP- DE: "@\000"- 00:07:51.235 [2024-12-09 15:40:46.386879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000be02 cdw11:00000000 00:07:51.235 [2024-12-09 15:40:46.386905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.235 [2024-12-09 15:40:46.386959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a58 cdw11:00000000 00:07:51.235 [2024-12-09 15:40:46.386973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.235 [2024-12-09 15:40:46.387024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000a158 cdw11:00000000 00:07:51.235 [2024-12-09 15:40:46.387037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.235 [2024-12-09 15:40:46.387089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00005858 cdw11:00000000 00:07:51.235 [2024-12-09 15:40:46.387102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.235 #33 NEW cov: 12331 ft: 14534 corp: 27/118b lim: 10 exec/s: 33 rss: 75Mb L: 9/9 MS: 1 InsertByte- 00:07:51.235 [2024-12-09 15:40:46.446720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000d09 cdw11:00000000 00:07:51.235 [2024-12-09 15:40:46.446745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.495 #34 NEW cov: 12331 ft: 14553 corp: 28/120b lim: 10 exec/s: 34 rss: 75Mb L: 2/9 MS: 1 ChangeBit- 00:07:51.495 [2024-12-09 15:40:46.487048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000a50a cdw11:00000000 00:07:51.495 [2024-12-09 15:40:46.487073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.495 [2024-12-09 15:40:46.487126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE 
IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a58 cdw11:00000000 00:07:51.495 [2024-12-09 15:40:46.487141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.495 [2024-12-09 15:40:46.487193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00005858 cdw11:00000000 00:07:51.495 [2024-12-09 15:40:46.487207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.495 #35 NEW cov: 12331 ft: 14679 corp: 29/126b lim: 10 exec/s: 35 rss: 75Mb L: 6/9 MS: 1 InsertByte- 00:07:51.495 [2024-12-09 15:40:46.527422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000be0a cdw11:00000000 00:07:51.495 [2024-12-09 15:40:46.527451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.495 [2024-12-09 15:40:46.527503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00005858 cdw11:00000000 00:07:51.495 [2024-12-09 15:40:46.527518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.495 [2024-12-09 15:40:46.527571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00005848 cdw11:00000000 00:07:51.495 [2024-12-09 15:40:46.527585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.495 [2024-12-09 15:40:46.527637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004848 cdw11:00000000 00:07:51.495 [2024-12-09 15:40:46.527667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.495 [2024-12-09 15:40:46.527720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00004848 cdw11:00000000 00:07:51.495 [2024-12-09 15:40:46.527738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:51.495 #36 NEW cov: 12331 ft: 14726 corp: 30/136b lim: 10 exec/s: 36 rss: 75Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:51.495 [2024-12-09 15:40:46.567275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aa5 cdw11:00000000 00:07:51.495 [2024-12-09 15:40:46.567300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.495 [2024-12-09 15:40:46.567355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000580a cdw11:00000000 00:07:51.495 [2024-12-09 15:40:46.567369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.495 [2024-12-09 15:40:46.567419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00005858 cdw11:00000000 00:07:51.495 [2024-12-09 15:40:46.567433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.495 #37 NEW cov: 12331 ft: 14752 corp: 31/142b lim: 10 exec/s: 37 rss: 75Mb L: 6/10 MS: 1 
ShuffleBytes- 00:07:51.495 [2024-12-09 15:40:46.627219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002d05 cdw11:00000000 00:07:51.495 [2024-12-09 15:40:46.627244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.495 #38 NEW cov: 12331 ft: 14762 corp: 32/145b lim: 10 exec/s: 38 rss: 75Mb L: 3/10 MS: 1 ShuffleBytes- 00:07:51.495 [2024-12-09 15:40:46.687860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000d00 cdw11:00000000 00:07:51.495 [2024-12-09 15:40:46.687885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.495 [2024-12-09 15:40:46.687939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:51.495 [2024-12-09 15:40:46.687952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.495 [2024-12-09 15:40:46.688006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:51.495 [2024-12-09 15:40:46.688019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.495 [2024-12-09 15:40:46.688072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:51.495 [2024-12-09 15:40:46.688086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.495 [2024-12-09 15:40:46.688139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000009 cdw11:00000000 00:07:51.495 [2024-12-09 15:40:46.688152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:51.755 #39 NEW cov: 12331 ft: 14839 corp: 33/155b lim: 10 exec/s: 39 rss: 75Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:51.755 [2024-12-09 15:40:46.747676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aa5 cdw11:00000000 00:07:51.755 [2024-12-09 15:40:46.747701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.755 [2024-12-09 15:40:46.747754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000580a cdw11:00000000 00:07:51.755 [2024-12-09 15:40:46.747768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.755 #40 NEW cov: 12331 ft: 14856 corp: 34/159b lim: 10 exec/s: 20 rss: 75Mb L: 4/10 MS: 1 EraseBytes- 00:07:51.755 #40 DONE cov: 12331 ft: 14856 corp: 34/159b lim: 10 exec/s: 20 rss: 75Mb 00:07:51.755 ###### Recommended dictionary. ###### 00:07:51.755 "@\000" # Uses: 0 00:07:51.755 ###### End of recommended dictionary. 
###### 00:07:51.755 Done 40 runs in 2 second(s) 00:07:51.755 15:40:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:07:51.755 15:40:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:51.755 15:40:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:51.755 15:40:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:07:51.755 15:40:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:07:51.755 15:40:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:51.755 15:40:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:51.755 15:40:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:51.755 15:40:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:07:51.755 15:40:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:51.755 15:40:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:51.755 15:40:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:07:51.755 15:40:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:07:51.755 15:40:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:51.755 15:40:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:07:51.755 15:40:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:51.755 15:40:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:51.755 15:40:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:51.755 15:40:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:07:51.755 [2024-12-09 15:40:46.952419] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:07:51.755 [2024-12-09 15:40:46.952494] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid907529 ] 00:07:52.014 [2024-12-09 15:40:47.218033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.274 [2024-12-09 15:40:47.266030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.274 [2024-12-09 15:40:47.325069] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.274 [2024-12-09 15:40:47.341222] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:07:52.274 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:52.274 INFO: Seed: 774779219 00:07:52.274 INFO: Loaded 1 modules (390541 inline 8-bit counters): 390541 [0x2c82a4c, 0x2ce1fd9), 00:07:52.274 INFO: Loaded 1 PC tables (390541 PCs): 390541 [0x2ce1fe0,0x32d78b0), 00:07:52.274 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:52.274 INFO: A corpus is not provided, starting from an empty corpus 00:07:52.274 #2 INITED exec/s: 0 rss: 66Mb 00:07:52.274 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:52.274 This may also happen if the target rejected all inputs we tried so far 00:07:52.274 [2024-12-09 15:40:47.396661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:52.274 [2024-12-09 15:40:47.396690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.533 NEW_FUNC[1/715]: 0x447128 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:07:52.533 NEW_FUNC[2/715]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:52.533 #3 NEW cov: 12104 ft: 12100 corp: 2/3b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 CrossOver- 00:07:52.533 [2024-12-09 15:40:47.727642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:52.533 [2024-12-09 15:40:47.727678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.533 [2024-12-09 15:40:47.727734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:52.533 [2024-12-09 15:40:47.727748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.792 #4 NEW cov: 12217 ft: 12733 corp: 3/7b lim: 10 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 CrossOver- 00:07:52.792 [2024-12-09 15:40:47.787769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000670a cdw11:00000000 00:07:52.792 [2024-12-09 15:40:47.787796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.792 [2024-12-09 15:40:47.787851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:52.792 [2024-12-09 15:40:47.787866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.792 #5 NEW cov: 12223 ft: 12921 corp: 4/11b lim: 10 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 ChangeByte- 00:07:52.792 [2024-12-09 15:40:47.848227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000670a cdw11:00000000 00:07:52.792 [2024-12-09 15:40:47.848253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.792 [2024-12-09 15:40:47.848306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006161 cdw11:00000000 00:07:52.793 [2024-12-09 15:40:47.848320] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.793 [2024-12-09 15:40:47.848370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00006161 cdw11:00000000 00:07:52.793 [2024-12-09 15:40:47.848384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.793 [2024-12-09 15:40:47.848438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00006161 cdw11:00000000 00:07:52.793 [2024-12-09 15:40:47.848451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.793 [2024-12-09 15:40:47.848501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:52.793 [2024-12-09 15:40:47.848515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:52.793 #6 NEW cov: 12308 ft: 13506 corp: 5/21b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:52.793 [2024-12-09 15:40:47.908039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a02 cdw11:00000000 00:07:52.793 [2024-12-09 15:40:47.908064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.793 [2024-12-09 15:40:47.908116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:52.793 [2024-12-09 15:40:47.908130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.793 #7 NEW cov: 12308 ft: 13754 corp: 6/25b lim: 10 exec/s: 0 rss: 73Mb L: 4/10 MS: 1 ChangeBit- 00:07:52.793 [2024-12-09 15:40:47.948515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000670a cdw11:00000000 00:07:52.793 [2024-12-09 15:40:47.948540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.793 [2024-12-09 15:40:47.948593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006161 cdw11:00000000 00:07:52.793 [2024-12-09 15:40:47.948608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.793 [2024-12-09 15:40:47.948658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00006161 cdw11:00000000 00:07:52.793 [2024-12-09 15:40:47.948672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.793 [2024-12-09 15:40:47.948725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00006161 cdw11:00000000 00:07:52.793 [2024-12-09 15:40:47.948739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.793 [2024-12-09 15:40:47.948790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:52.793 [2024-12-09 15:40:47.948804] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:52.793 #8 NEW cov: 12308 ft: 13876 corp: 7/35b lim: 10 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 ShuffleBytes- 00:07:52.793 [2024-12-09 15:40:48.008285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:52.793 [2024-12-09 15:40:48.008310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.793 [2024-12-09 15:40:48.008375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004a0a cdw11:00000000 00:07:52.793 [2024-12-09 15:40:48.008390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.052 #14 NEW cov: 12308 ft: 14030 corp: 8/39b lim: 10 exec/s: 0 rss: 74Mb L: 4/10 MS: 1 ChangeBit- 00:07:53.052 [2024-12-09 15:40:48.048648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a02 cdw11:00000000 00:07:53.052 [2024-12-09 15:40:48.048673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.052 [2024-12-09 15:40:48.048729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.052 [2024-12-09 15:40:48.048743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.052 [2024-12-09 15:40:48.048792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.052 [2024-12-09 15:40:48.048806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.052 [2024-12-09 15:40:48.048860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:53.052 [2024-12-09 15:40:48.048892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.052 #15 NEW cov: 12308 ft: 14092 corp: 9/47b lim: 10 exec/s: 0 rss: 74Mb L: 8/10 MS: 1 InsertRepeatedBytes- 00:07:53.052 [2024-12-09 15:40:48.108588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:53.052 [2024-12-09 15:40:48.108614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.052 [2024-12-09 15:40:48.108664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:53.052 [2024-12-09 15:40:48.108678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.052 #16 NEW cov: 12308 ft: 14117 corp: 10/52b lim: 10 exec/s: 0 rss: 74Mb L: 5/10 MS: 1 CrossOver- 00:07:53.052 [2024-12-09 15:40:48.148916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:53.052 [2024-12-09 15:40:48.148941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.052 [2024-12-09 
15:40:48.148994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000200 cdw11:00000000 00:07:53.052 [2024-12-09 15:40:48.149008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.053 [2024-12-09 15:40:48.149061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.053 [2024-12-09 15:40:48.149074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.053 [2024-12-09 15:40:48.149124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 00:07:53.053 [2024-12-09 15:40:48.149137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.053 #17 NEW cov: 12308 ft: 14150 corp: 11/61b lim: 10 exec/s: 0 rss: 74Mb L: 9/10 MS: 1 CopyPart- 00:07:53.053 [2024-12-09 15:40:48.208912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00006b6b cdw11:00000000 00:07:53.053 [2024-12-09 15:40:48.208937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.053 [2024-12-09 15:40:48.208988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006b0a cdw11:00000000 00:07:53.053 [2024-12-09 15:40:48.209003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.053 #21 NEW cov: 12308 ft: 14233 corp: 12/65b lim: 10 exec/s: 0 rss: 74Mb L: 4/10 MS: 4 EraseBytes-ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:53.053 [2024-12-09 15:40:48.249203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000670a cdw11:00000000 00:07:53.053 [2024-12-09 15:40:48.249229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.053 [2024-12-09 15:40:48.249286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:53.053 [2024-12-09 15:40:48.249301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.053 [2024-12-09 15:40:48.249349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000aeae cdw11:00000000 00:07:53.053 [2024-12-09 15:40:48.249363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.053 [2024-12-09 15:40:48.249413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000aeae cdw11:00000000 00:07:53.053 [2024-12-09 15:40:48.249426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.053 NEW_FUNC[1/1]: 0x1c54978 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:53.053 #22 NEW cov: 12331 ft: 14252 corp: 13/74b lim: 10 exec/s: 0 rss: 74Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:07:53.312 [2024-12-09 15:40:48.289383] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000098f5 cdw11:00000000 00:07:53.312 [2024-12-09 15:40:48.289409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.312 [2024-12-09 15:40:48.289461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f5f5 cdw11:00000000 00:07:53.312 [2024-12-09 15:40:48.289476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.313 [2024-12-09 15:40:48.289525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005151 cdw11:00000000 00:07:53.313 [2024-12-09 15:40:48.289539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.313 [2024-12-09 15:40:48.289589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00005151 cdw11:00000000 00:07:53.313 [2024-12-09 15:40:48.289620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.313 #23 NEW cov: 12331 ft: 14276 corp: 14/83b lim: 10 exec/s: 0 rss: 74Mb L: 9/10 MS: 1 ChangeBinInt- 00:07:53.313 [2024-12-09 15:40:48.349284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a02 cdw11:00000000 00:07:53.313 [2024-12-09 15:40:48.349310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.313 [2024-12-09 15:40:48.349362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004a0a cdw11:00000000 00:07:53.313 [2024-12-09 15:40:48.349376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.313 #24 NEW cov: 12331 ft: 14308 corp: 15/87b lim: 10 exec/s: 24 rss: 74Mb L: 4/10 MS: 1 ChangeBit- 00:07:53.313 [2024-12-09 15:40:48.409711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:53.313 [2024-12-09 15:40:48.409737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.313 [2024-12-09 15:40:48.409790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004a0a cdw11:00000000 00:07:53.313 [2024-12-09 15:40:48.409805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.313 [2024-12-09 15:40:48.409860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:53.313 [2024-12-09 15:40:48.409874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.313 [2024-12-09 15:40:48.409928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00004a0a cdw11:00000000 00:07:53.313 [2024-12-09 15:40:48.409942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.313 #25 NEW cov: 12331 ft: 14367 corp: 16/95b lim: 
10 exec/s: 25 rss: 74Mb L: 8/10 MS: 1 CopyPart- 00:07:53.313 [2024-12-09 15:40:48.449534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:53.313 [2024-12-09 15:40:48.449559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.313 [2024-12-09 15:40:48.449610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004a0a cdw11:00000000 00:07:53.313 [2024-12-09 15:40:48.449624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.313 #26 NEW cov: 12331 ft: 14397 corp: 17/99b lim: 10 exec/s: 26 rss: 74Mb L: 4/10 MS: 1 EraseBytes- 00:07:53.313 [2024-12-09 15:40:48.509751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ca6b cdw11:00000000 00:07:53.313 [2024-12-09 15:40:48.509775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.313 [2024-12-09 15:40:48.509826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006b6b cdw11:00000000 00:07:53.313 [2024-12-09 15:40:48.509840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.572 #27 NEW cov: 12331 ft: 14420 corp: 18/104b lim: 10 exec/s: 27 rss: 74Mb L: 5/10 MS: 1 InsertByte- 00:07:53.572 [2024-12-09 15:40:48.569955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a04 cdw11:00000000 00:07:53.572 [2024-12-09 15:40:48.569980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.572 [2024-12-09 15:40:48.570034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:53.572 [2024-12-09 15:40:48.570048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.572 #28 NEW cov: 12331 ft: 14434 corp: 19/108b lim: 10 exec/s: 28 rss: 74Mb L: 4/10 MS: 1 ChangeBinInt- 00:07:53.572 [2024-12-09 15:40:48.610241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000098f5 cdw11:00000000 00:07:53.572 [2024-12-09 15:40:48.610266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.572 [2024-12-09 15:40:48.610317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f5f5 cdw11:00000000 00:07:53.572 [2024-12-09 15:40:48.610332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.572 [2024-12-09 15:40:48.610383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005151 cdw11:00000000 00:07:53.572 [2024-12-09 15:40:48.610396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.572 [2024-12-09 15:40:48.610449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00005198 cdw11:00000000 
00:07:53.572 [2024-12-09 15:40:48.610462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.572 #29 NEW cov: 12331 ft: 14471 corp: 20/117b lim: 10 exec/s: 29 rss: 74Mb L: 9/10 MS: 1 CopyPart- 00:07:53.572 [2024-12-09 15:40:48.670194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a02 cdw11:00000000 00:07:53.572 [2024-12-09 15:40:48.670222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.572 [2024-12-09 15:40:48.670276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:53.572 [2024-12-09 15:40:48.670290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.572 #30 NEW cov: 12331 ft: 14492 corp: 21/122b lim: 10 exec/s: 30 rss: 74Mb L: 5/10 MS: 1 CrossOver- 00:07:53.572 [2024-12-09 15:40:48.710391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00006b6b cdw11:00000000 00:07:53.572 [2024-12-09 15:40:48.710417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.573 [2024-12-09 15:40:48.710470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006bff cdw11:00000000 00:07:53.573 [2024-12-09 15:40:48.710484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.573 [2024-12-09 15:40:48.710534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:53.573 [2024-12-09 15:40:48.710548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.573 #31 NEW cov: 12331 ft: 14621 corp: 22/129b lim: 10 exec/s: 31 rss: 74Mb L: 7/10 MS: 1 InsertRepeatedBytes- 00:07:53.573 [2024-12-09 15:40:48.750759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000670a cdw11:00000000 00:07:53.573 [2024-12-09 15:40:48.750783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.573 [2024-12-09 15:40:48.750834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:53.573 [2024-12-09 15:40:48.750853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.573 [2024-12-09 15:40:48.750905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000aeae cdw11:00000000 00:07:53.573 [2024-12-09 15:40:48.750918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.573 [2024-12-09 15:40:48.750970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000aeae cdw11:00000000 00:07:53.573 [2024-12-09 15:40:48.750983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.573 [2024-12-09 15:40:48.751035] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000aae cdw11:00000000 00:07:53.573 [2024-12-09 15:40:48.751049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:53.573 #32 NEW cov: 12331 ft: 14632 corp: 23/139b lim: 10 exec/s: 32 rss: 74Mb L: 10/10 MS: 1 CrossOver- 00:07:53.573 [2024-12-09 15:40:48.790458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ac4 cdw11:00000000 00:07:53.573 [2024-12-09 15:40:48.790483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.832 #34 NEW cov: 12331 ft: 14656 corp: 24/141b lim: 10 exec/s: 34 rss: 74Mb L: 2/10 MS: 2 CopyPart-InsertByte- 00:07:53.832 [2024-12-09 15:40:48.830877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a02 cdw11:00000000 00:07:53.832 [2024-12-09 15:40:48.830902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.832 [2024-12-09 15:40:48.830972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.832 [2024-12-09 15:40:48.830992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.832 [2024-12-09 15:40:48.831045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.832 [2024-12-09 15:40:48.831059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.832 [2024-12-09 15:40:48.831109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 00:07:53.832 [2024-12-09 15:40:48.831123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.832 #35 NEW cov: 12331 ft: 14675 corp: 25/150b lim: 10 exec/s: 35 rss: 74Mb L: 9/10 MS: 1 CrossOver- 00:07:53.832 [2024-12-09 15:40:48.870877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.832 [2024-12-09 15:40:48.870902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.832 [2024-12-09 15:40:48.870953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.832 [2024-12-09 15:40:48.870968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.832 [2024-12-09 15:40:48.871017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 00:07:53.832 [2024-12-09 15:40:48.871032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.832 #36 NEW cov: 12331 ft: 14698 corp: 26/157b lim: 10 exec/s: 36 rss: 74Mb L: 7/10 MS: 1 EraseBytes- 00:07:53.832 [2024-12-09 15:40:48.930976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 
cdw10:00000a0a cdw11:00000000 00:07:53.832 [2024-12-09 15:40:48.931001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.832 [2024-12-09 15:40:48.931053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:53.832 [2024-12-09 15:40:48.931067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.832 #37 NEW cov: 12331 ft: 14730 corp: 27/161b lim: 10 exec/s: 37 rss: 74Mb L: 4/10 MS: 1 ShuffleBytes- 00:07:53.832 [2024-12-09 15:40:48.971042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:53.833 [2024-12-09 15:40:48.971067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.833 [2024-12-09 15:40:48.971120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a4a cdw11:00000000 00:07:53.833 [2024-12-09 15:40:48.971134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.833 #38 NEW cov: 12331 ft: 14743 corp: 28/165b lim: 10 exec/s: 38 rss: 74Mb L: 4/10 MS: 1 ShuffleBytes- 00:07:53.833 [2024-12-09 15:40:49.011091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000310a cdw11:00000000 00:07:53.833 [2024-12-09 15:40:49.011116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.833 #39 NEW cov: 12331 ft: 14790 corp: 29/167b lim: 10 exec/s: 39 rss: 74Mb L: 2/10 MS: 1 InsertByte- 00:07:53.833 [2024-12-09 15:40:49.051298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:53.833 [2024-12-09 15:40:49.051324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.833 [2024-12-09 15:40:49.051380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:53.833 [2024-12-09 15:40:49.051394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.092 #40 NEW cov: 12331 ft: 14875 corp: 30/171b lim: 10 exec/s: 40 rss: 74Mb L: 4/10 MS: 1 ChangeBinInt- 00:07:54.092 [2024-12-09 15:40:49.111454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a02 cdw11:00000000 00:07:54.092 [2024-12-09 15:40:49.111480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.092 [2024-12-09 15:40:49.111532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004a21 cdw11:00000000 00:07:54.092 [2024-12-09 15:40:49.111546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.092 #41 NEW cov: 12331 ft: 14878 corp: 31/175b lim: 10 exec/s: 41 rss: 75Mb L: 4/10 MS: 1 ChangeByte- 00:07:54.092 [2024-12-09 15:40:49.172014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 
nsid:0 cdw10:0000670a cdw11:00000000 00:07:54.092 [2024-12-09 15:40:49.172040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.092 [2024-12-09 15:40:49.172093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006161 cdw11:00000000 00:07:54.092 [2024-12-09 15:40:49.172107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.092 [2024-12-09 15:40:49.172160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f700 cdw11:00000000 00:07:54.092 [2024-12-09 15:40:49.172173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.092 [2024-12-09 15:40:49.172225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:54.092 [2024-12-09 15:40:49.172239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.092 [2024-12-09 15:40:49.172290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:54.092 [2024-12-09 15:40:49.172304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.092 #42 NEW cov: 12331 ft: 14908 corp: 32/185b lim: 10 exec/s: 42 rss: 75Mb L: 10/10 MS: 1 CMP- DE: "\367\000\000\000"- 00:07:54.092 [2024-12-09 15:40:49.232096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:54.092 [2024-12-09 15:40:49.232121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.092 [2024-12-09 15:40:49.232174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000af7 cdw11:00000000 00:07:54.092 [2024-12-09 15:40:49.232189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.092 [2024-12-09 15:40:49.232242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:54.092 [2024-12-09 15:40:49.232255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.092 [2024-12-09 15:40:49.232307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:54.092 [2024-12-09 15:40:49.232321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.092 #43 NEW cov: 12331 ft: 14923 corp: 33/193b lim: 10 exec/s: 43 rss: 75Mb L: 8/10 MS: 1 PersAutoDict- DE: "\367\000\000\000"- 00:07:54.092 [2024-12-09 15:40:49.292023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a02 cdw11:00000000 00:07:54.092 [2024-12-09 15:40:49.292048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.092 [2024-12-09 15:40:49.292103] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a09 cdw11:00000000 00:07:54.092 [2024-12-09 15:40:49.292117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.352 [2024-12-09 15:40:49.352147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:54.352 [2024-12-09 15:40:49.352172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.352 [2024-12-09 15:40:49.352227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000902 cdw11:00000000 00:07:54.352 [2024-12-09 15:40:49.352241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.352 #45 NEW cov: 12331 ft: 14931 corp: 34/198b lim: 10 exec/s: 45 rss: 75Mb L: 5/10 MS: 2 ChangeBinInt-ShuffleBytes- 00:07:54.352 [2024-12-09 15:40:49.392239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:54.352 [2024-12-09 15:40:49.392263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.352 [2024-12-09 15:40:49.392330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000480a cdw11:00000000 00:07:54.352 [2024-12-09 15:40:49.392345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.352 #46 NEW cov: 12331 ft: 14942 corp: 35/202b lim: 10 exec/s: 23 rss: 75Mb L: 4/10 MS: 1 ChangeBit- 00:07:54.352 #46 DONE cov: 12331 ft: 14942 corp: 35/202b lim: 10 exec/s: 23 rss: 75Mb 00:07:54.352 ###### Recommended dictionary. ###### 00:07:54.352 "\367\000\000\000" # Uses: 1 00:07:54.352 ###### End of recommended dictionary. 
###### 00:07:54.352 Done 46 runs in 2 second(s) 00:07:54.352 15:40:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:07:54.352 15:40:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:54.352 15:40:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:54.352 15:40:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:07:54.352 15:40:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:07:54.352 15:40:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:54.352 15:40:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:54.352 15:40:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:54.352 15:40:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:07:54.352 15:40:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:54.352 15:40:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:54.352 15:40:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:07:54.352 15:40:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:07:54.352 15:40:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:54.352 15:40:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:07:54.352 15:40:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:54.352 15:40:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:54.352 15:40:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:54.352 15:40:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:07:54.352 [2024-12-09 15:40:49.573369] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:07:54.352 [2024-12-09 15:40:49.573441] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid907894 ] 00:07:54.921 [2024-12-09 15:40:49.841028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.921 [2024-12-09 15:40:49.895242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.921 [2024-12-09 15:40:49.954164] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.921 [2024-12-09 15:40:49.970305] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:07:54.921 INFO: Running with entropic power schedule (0xFF, 100). 
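For reference, the nvmf/run.sh trace above boils down to a handful of setup steps: derive a per-fuzzer TCP service ID (fuzzer 8 listens on 4408, fuzzer 9 on 4409), rewrite the stock fuzz_json.conf to that trsvcid, write out the LeakSanitizer suppressions, create the corpus directory, and launch llvm_nvme_fuzz. The minimal bash sketch below approximates those steps; the "44" port prefix, the output redirections, and the variable names are assumptions inferred from the values printed in this log, not copied from nvmf/run.sh.

    #!/usr/bin/env bash
    # Sketch of the per-run setup traced above (fuzzer 8 shown); paths and the
    # 44xx port convention are inferred from this log, not from nvmf/run.sh.
    set -euo pipefail
    FUZZER=8
    SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    CORPUS_DIR="$SPDK_DIR/../corpus/llvm_nvmf_$FUZZER"
    NVMF_CFG="/tmp/fuzz_json_$FUZZER.conf"
    SUPPRESS=/var/tmp/suppress_nvmf_fuzz

    port="44$(printf %02d "$FUZZER")"        # 4408 for fuzzer 8, 4409 for fuzzer 9
    mkdir -p "$CORPUS_DIR"

    # Point the JSON transport config at the per-fuzzer service ID.
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$NVMF_CFG"

    # Suppress the two known leak reports before running under LeakSanitizer.
    printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > "$SUPPRESS"
    export LSAN_OPTIONS="report_objects=1:suppressions=$SUPPRESS:print_suppressions=0"

    # Launch the LLVM NVMe-oF fuzzer with the same flags the trace shows.
    "$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
        -P "$SPDK_DIR/../output/llvm/" \
        -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port" \
        -c "$NVMF_CFG" -t 1 -D "$CORPUS_DIR" -Z "$FUZZER"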
00:07:54.921 INFO: Seed: 3403793061 00:07:54.921 INFO: Loaded 1 modules (390541 inline 8-bit counters): 390541 [0x2c82a4c, 0x2ce1fd9), 00:07:54.921 INFO: Loaded 1 PC tables (390541 PCs): 390541 [0x2ce1fe0,0x32d78b0), 00:07:54.921 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:54.921 INFO: A corpus is not provided, starting from an empty corpus 00:07:54.921 [2024-12-09 15:40:50.015864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.921 [2024-12-09 15:40:50.015899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.921 #2 INITED cov: 12131 ft: 12095 corp: 1/1b exec/s: 0 rss: 72Mb 00:07:54.921 [2024-12-09 15:40:50.055965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.921 [2024-12-09 15:40:50.055996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.921 [2024-12-09 15:40:50.056051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.921 [2024-12-09 15:40:50.056066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.921 #3 NEW cov: 12244 ft: 13476 corp: 2/3b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 InsertByte- 00:07:54.921 [2024-12-09 15:40:50.115955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.921 [2024-12-09 15:40:50.115983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.921 #4 NEW cov: 12250 ft: 13689 corp: 3/4b lim: 5 exec/s: 0 rss: 72Mb L: 1/2 MS: 1 CrossOver- 00:07:55.181 [2024-12-09 15:40:50.156030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.181 [2024-12-09 15:40:50.156056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.181 #5 NEW cov: 12335 ft: 14009 corp: 4/5b lim: 5 exec/s: 0 rss: 72Mb L: 1/2 MS: 1 ShuffleBytes- 00:07:55.181 [2024-12-09 15:40:50.216331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.181 [2024-12-09 15:40:50.216360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.181 [2024-12-09 15:40:50.216415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.181 [2024-12-09 15:40:50.216429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.181 #6 NEW cov: 12335 ft: 14103 corp: 5/7b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 ChangeBinInt- 00:07:55.181 
[2024-12-09 15:40:50.276492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.181 [2024-12-09 15:40:50.276519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.181 [2024-12-09 15:40:50.276588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.181 [2024-12-09 15:40:50.276603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.181 #7 NEW cov: 12335 ft: 14217 corp: 6/9b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 InsertByte- 00:07:55.181 [2024-12-09 15:40:50.317138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.181 [2024-12-09 15:40:50.317164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.181 [2024-12-09 15:40:50.317220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.181 [2024-12-09 15:40:50.317234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.181 [2024-12-09 15:40:50.317288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.181 [2024-12-09 15:40:50.317302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.181 [2024-12-09 15:40:50.317356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.181 [2024-12-09 15:40:50.317369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.181 [2024-12-09 15:40:50.317424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.181 [2024-12-09 15:40:50.317437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.181 #8 NEW cov: 12335 ft: 14624 corp: 7/14b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CMP- DE: "\000\000\000\001"- 00:07:55.181 [2024-12-09 15:40:50.376927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.181 [2024-12-09 15:40:50.376952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.181 [2024-12-09 15:40:50.377023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.181 [2024-12-09 15:40:50.377039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.181 [2024-12-09 15:40:50.377090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.181 [2024-12-09 15:40:50.377107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.441 #9 NEW cov: 12335 ft: 14788 corp: 8/17b lim: 5 exec/s: 0 rss: 73Mb L: 3/5 MS: 1 InsertByte- 00:07:55.441 [2024-12-09 15:40:50.436792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.441 [2024-12-09 15:40:50.436817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.441 #10 NEW cov: 12335 ft: 14823 corp: 9/18b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 EraseBytes- 00:07:55.441 [2024-12-09 15:40:50.497141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.441 [2024-12-09 15:40:50.497167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.441 [2024-12-09 15:40:50.497222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.441 [2024-12-09 15:40:50.497236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.441 #11 NEW cov: 12335 ft: 14885 corp: 10/20b lim: 5 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 ChangeBinInt- 00:07:55.441 [2024-12-09 15:40:50.537435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.441 [2024-12-09 15:40:50.537460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.441 [2024-12-09 15:40:50.537529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.441 [2024-12-09 15:40:50.537544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.441 [2024-12-09 15:40:50.537595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.441 [2024-12-09 15:40:50.537608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.441 #12 NEW cov: 12335 ft: 14935 corp: 11/23b lim: 5 exec/s: 0 rss: 73Mb L: 3/5 MS: 1 CopyPart- 00:07:55.441 [2024-12-09 15:40:50.577402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.441 [2024-12-09 15:40:50.577427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.441 [2024-12-09 
15:40:50.577496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.441 [2024-12-09 15:40:50.577511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.441 #13 NEW cov: 12335 ft: 14960 corp: 12/25b lim: 5 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 InsertByte- 00:07:55.441 [2024-12-09 15:40:50.617365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.441 [2024-12-09 15:40:50.617389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.441 #14 NEW cov: 12335 ft: 15035 corp: 13/26b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:07:55.441 [2024-12-09 15:40:50.657734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.441 [2024-12-09 15:40:50.657761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.441 [2024-12-09 15:40:50.657831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.441 [2024-12-09 15:40:50.657852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.441 [2024-12-09 15:40:50.657906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.441 [2024-12-09 15:40:50.657919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.701 #15 NEW cov: 12335 ft: 15063 corp: 14/29b lim: 5 exec/s: 0 rss: 73Mb L: 3/5 MS: 1 CrossOver- 00:07:55.701 [2024-12-09 15:40:50.717732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.701 [2024-12-09 15:40:50.717757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.701 [2024-12-09 15:40:50.717826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.701 [2024-12-09 15:40:50.717841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.701 #16 NEW cov: 12344 ft: 15115 corp: 15/31b lim: 5 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 ChangeBinInt- 00:07:55.701 [2024-12-09 15:40:50.757859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.701 [2024-12-09 15:40:50.757884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.701 [2024-12-09 15:40:50.757953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT 
(15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.701 [2024-12-09 15:40:50.757968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.701 #17 NEW cov: 12344 ft: 15139 corp: 16/33b lim: 5 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:55.701 [2024-12-09 15:40:50.818001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.701 [2024-12-09 15:40:50.818026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.701 [2024-12-09 15:40:50.818095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.701 [2024-12-09 15:40:50.818110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.701 #18 NEW cov: 12344 ft: 15147 corp: 17/35b lim: 5 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 ChangeBinInt- 00:07:55.701 [2024-12-09 15:40:50.858146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.701 [2024-12-09 15:40:50.858170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.701 [2024-12-09 15:40:50.858240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.701 [2024-12-09 15:40:50.858255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.701 #19 NEW cov: 12344 ft: 15152 corp: 18/37b lim: 5 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 ChangeBit- 00:07:55.701 [2024-12-09 15:40:50.898200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.701 [2024-12-09 15:40:50.898224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.701 [2024-12-09 15:40:50.898277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.701 [2024-12-09 15:40:50.898291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.270 NEW_FUNC[1/1]: 0x1c54978 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:56.270 #20 NEW cov: 12367 ft: 15218 corp: 19/39b lim: 5 exec/s: 20 rss: 74Mb L: 2/5 MS: 1 InsertByte- 00:07:56.270 [2024-12-09 15:40:51.209563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.270 [2024-12-09 15:40:51.209602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.270 [2024-12-09 15:40:51.209678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.270 [2024-12-09 15:40:51.209694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.270 [2024-12-09 15:40:51.209754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.270 [2024-12-09 15:40:51.209768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.270 [2024-12-09 15:40:51.209826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.270 [2024-12-09 15:40:51.209840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.270 #21 NEW cov: 12367 ft: 15234 corp: 20/43b lim: 5 exec/s: 21 rss: 74Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:07:56.270 [2024-12-09 15:40:51.269259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.270 [2024-12-09 15:40:51.269285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.270 [2024-12-09 15:40:51.269358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.270 [2024-12-09 15:40:51.269373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.270 #22 NEW cov: 12367 ft: 15256 corp: 21/45b lim: 5 exec/s: 22 rss: 74Mb L: 2/5 MS: 1 ChangeBit- 00:07:56.270 [2024-12-09 15:40:51.309357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.270 [2024-12-09 15:40:51.309383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.270 [2024-12-09 15:40:51.309456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.270 [2024-12-09 15:40:51.309471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.270 #23 NEW cov: 12367 ft: 15285 corp: 22/47b lim: 5 exec/s: 23 rss: 74Mb L: 2/5 MS: 1 CopyPart- 00:07:56.270 [2024-12-09 15:40:51.349778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.270 [2024-12-09 15:40:51.349803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.270 [2024-12-09 15:40:51.349865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.270 [2024-12-09 15:40:51.349880] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.270 [2024-12-09 15:40:51.349939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.270 [2024-12-09 15:40:51.349952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.270 [2024-12-09 15:40:51.350006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.270 [2024-12-09 15:40:51.350020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.270 #24 NEW cov: 12367 ft: 15288 corp: 23/51b lim: 5 exec/s: 24 rss: 74Mb L: 4/5 MS: 1 PersAutoDict- DE: "\000\000\000\001"- 00:07:56.270 [2024-12-09 15:40:51.409617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.270 [2024-12-09 15:40:51.409643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.270 [2024-12-09 15:40:51.409699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.270 [2024-12-09 15:40:51.409714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.270 #25 NEW cov: 12367 ft: 15330 corp: 24/53b lim: 5 exec/s: 25 rss: 74Mb L: 2/5 MS: 1 CrossOver- 00:07:56.270 [2024-12-09 15:40:51.449879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.270 [2024-12-09 15:40:51.449904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.271 [2024-12-09 15:40:51.449978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.271 [2024-12-09 15:40:51.449993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.271 [2024-12-09 15:40:51.450050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.271 [2024-12-09 15:40:51.450064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.271 #26 NEW cov: 12367 ft: 15334 corp: 25/56b lim: 5 exec/s: 26 rss: 74Mb L: 3/5 MS: 1 ShuffleBytes- 00:07:56.529 [2024-12-09 15:40:51.510063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.529 [2024-12-09 15:40:51.510088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.529 [2024-12-09 15:40:51.510164] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.529 [2024-12-09 15:40:51.510182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.529 [2024-12-09 15:40:51.510238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.529 [2024-12-09 15:40:51.510253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.529 #27 NEW cov: 12367 ft: 15362 corp: 26/59b lim: 5 exec/s: 27 rss: 74Mb L: 3/5 MS: 1 CrossOver- 00:07:56.529 [2024-12-09 15:40:51.550468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.529 [2024-12-09 15:40:51.550493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.529 [2024-12-09 15:40:51.550569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.529 [2024-12-09 15:40:51.550584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.529 [2024-12-09 15:40:51.550643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.530 [2024-12-09 15:40:51.550657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.530 [2024-12-09 15:40:51.550717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.530 [2024-12-09 15:40:51.550731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.530 [2024-12-09 15:40:51.550787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.530 [2024-12-09 15:40:51.550801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:56.530 #28 NEW cov: 12367 ft: 15391 corp: 27/64b lim: 5 exec/s: 28 rss: 74Mb L: 5/5 MS: 1 CopyPart- 00:07:56.530 [2024-12-09 15:40:51.610184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.530 [2024-12-09 15:40:51.610210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.530 [2024-12-09 15:40:51.610267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.530 [2024-12-09 15:40:51.610282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:07:56.530 #29 NEW cov: 12367 ft: 15416 corp: 28/66b lim: 5 exec/s: 29 rss: 74Mb L: 2/5 MS: 1 ChangeBit- 00:07:56.530 [2024-12-09 15:40:51.670333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.530 [2024-12-09 15:40:51.670360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.530 [2024-12-09 15:40:51.670418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.530 [2024-12-09 15:40:51.670433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.530 #30 NEW cov: 12367 ft: 15436 corp: 29/68b lim: 5 exec/s: 30 rss: 74Mb L: 2/5 MS: 1 ChangeByte- 00:07:56.530 [2024-12-09 15:40:51.710596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.530 [2024-12-09 15:40:51.710622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.530 [2024-12-09 15:40:51.710696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.530 [2024-12-09 15:40:51.710712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.530 [2024-12-09 15:40:51.710770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.530 [2024-12-09 15:40:51.710784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.530 #31 NEW cov: 12367 ft: 15439 corp: 30/71b lim: 5 exec/s: 31 rss: 74Mb L: 3/5 MS: 1 InsertByte- 00:07:56.789 [2024-12-09 15:40:51.770771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.789 [2024-12-09 15:40:51.770796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.789 [2024-12-09 15:40:51.770874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.789 [2024-12-09 15:40:51.770890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.789 [2024-12-09 15:40:51.770948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.789 [2024-12-09 15:40:51.770962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.789 #32 NEW cov: 12367 ft: 15445 corp: 31/74b lim: 5 exec/s: 32 rss: 75Mb L: 3/5 MS: 1 ShuffleBytes- 00:07:56.789 [2024-12-09 15:40:51.831268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.789 [2024-12-09 15:40:51.831294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.789 [2024-12-09 15:40:51.831370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.789 [2024-12-09 15:40:51.831386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.789 [2024-12-09 15:40:51.831443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.789 [2024-12-09 15:40:51.831457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.789 [2024-12-09 15:40:51.831513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.789 [2024-12-09 15:40:51.831528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.789 [2024-12-09 15:40:51.831586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.789 [2024-12-09 15:40:51.831600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:56.789 #33 NEW cov: 12367 ft: 15452 corp: 32/79b lim: 5 exec/s: 33 rss: 75Mb L: 5/5 MS: 1 PersAutoDict- DE: "\000\000\000\001"- 00:07:56.789 [2024-12-09 15:40:51.890794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.789 [2024-12-09 15:40:51.890819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.789 #34 NEW cov: 12367 ft: 15468 corp: 33/80b lim: 5 exec/s: 34 rss: 75Mb L: 1/5 MS: 1 EraseBytes- 00:07:56.789 [2024-12-09 15:40:51.931051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.789 [2024-12-09 15:40:51.931076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.789 [2024-12-09 15:40:51.931135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.789 [2024-12-09 15:40:51.931149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.789 #35 NEW cov: 12367 ft: 15476 corp: 34/82b lim: 5 exec/s: 35 rss: 75Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:56.789 [2024-12-09 15:40:51.971148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.789 [2024-12-09 15:40:51.971174] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.789 [2024-12-09 15:40:51.971233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.789 [2024-12-09 15:40:51.971247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.789 #36 NEW cov: 12367 ft: 15492 corp: 35/84b lim: 5 exec/s: 18 rss: 75Mb L: 2/5 MS: 1 EraseBytes- 00:07:56.789 #36 DONE cov: 12367 ft: 15492 corp: 35/84b lim: 5 exec/s: 18 rss: 75Mb 00:07:56.789 ###### Recommended dictionary. ###### 00:07:56.789 "\000\000\000\001" # Uses: 2 00:07:56.789 ###### End of recommended dictionary. ###### 00:07:56.789 Done 36 runs in 2 second(s) 00:07:57.049 15:40:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:07:57.049 15:40:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:57.049 15:40:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:57.049 15:40:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:07:57.049 15:40:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:07:57.049 15:40:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:57.049 15:40:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:57.049 15:40:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:57.049 15:40:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:07:57.049 15:40:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:57.049 15:40:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:57.049 15:40:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:07:57.049 15:40:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:07:57.049 15:40:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:57.049 15:40:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:07:57.049 15:40:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:57.049 15:40:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:57.049 15:40:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:57.049 15:40:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:07:57.049 [2024-12-09 15:40:52.164971] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
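The recommended-dictionary block above lists the byte strings the fuzzer found worth keeping, printed as C-style octal escapes plus a use count; the same values recur in the mutation log as CMP- and PersAutoDict- entries with matching DE: values. "\000\000\000\001" is the four bytes 00 00 00 01, and "\367\000\000\000" from the previous run decodes to f7 00 00 00, the 0xf7 byte visible in that run's mutated commands. A throwaway one-liner, not part of the test scripts, to dump such an entry as hex:

    # Expand the octal escapes of a dictionary entry and hex-dump the raw bytes.
    printf '\367\000\000\000' | od -An -tx1
    # prints:  f7 00 00 00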
00:07:57.049 [2024-12-09 15:40:52.165040] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid908247 ] 00:07:57.309 [2024-12-09 15:40:52.436018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.309 [2024-12-09 15:40:52.487316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.568 [2024-12-09 15:40:52.546307] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.568 [2024-12-09 15:40:52.562448] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:07:57.568 INFO: Running with entropic power schedule (0xFF, 100). 00:07:57.568 INFO: Seed: 1700827615 00:07:57.568 INFO: Loaded 1 modules (390541 inline 8-bit counters): 390541 [0x2c82a4c, 0x2ce1fd9), 00:07:57.568 INFO: Loaded 1 PC tables (390541 PCs): 390541 [0x2ce1fe0,0x32d78b0), 00:07:57.568 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:57.568 INFO: A corpus is not provided, starting from an empty corpus 00:07:57.568 [2024-12-09 15:40:52.607978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.568 [2024-12-09 15:40:52.608008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.568 #2 INITED cov: 12131 ft: 12105 corp: 1/1b exec/s: 0 rss: 73Mb 00:07:57.568 [2024-12-09 15:40:52.648002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.568 [2024-12-09 15:40:52.648028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.568 #3 NEW cov: 12244 ft: 12773 corp: 2/2b lim: 5 exec/s: 0 rss: 73Mb L: 1/1 MS: 1 ChangeBit- 00:07:57.568 [2024-12-09 15:40:52.708149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.568 [2024-12-09 15:40:52.708175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.568 #4 NEW cov: 12250 ft: 13089 corp: 3/3b lim: 5 exec/s: 0 rss: 73Mb L: 1/1 MS: 1 ChangeBit- 00:07:57.568 [2024-12-09 15:40:52.768440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.568 [2024-12-09 15:40:52.768465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.568 [2024-12-09 15:40:52.768537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.568 [2024-12-09 15:40:52.768552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.828 #5 NEW cov: 12335 ft: 13958 corp: 4/5b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 CopyPart- 00:07:57.828 [2024-12-09 15:40:52.828911] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.828 [2024-12-09 15:40:52.828939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.828 [2024-12-09 15:40:52.828998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.828 [2024-12-09 15:40:52.829012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.828 [2024-12-09 15:40:52.829067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.828 [2024-12-09 15:40:52.829081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.828 [2024-12-09 15:40:52.829136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.828 [2024-12-09 15:40:52.829150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.828 #6 NEW cov: 12335 ft: 14337 corp: 5/9b lim: 5 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:07:57.828 [2024-12-09 15:40:52.868547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.828 [2024-12-09 15:40:52.868572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.828 #7 NEW cov: 12335 ft: 14464 corp: 6/10b lim: 5 exec/s: 0 rss: 73Mb L: 1/4 MS: 1 ShuffleBytes- 00:07:57.828 [2024-12-09 15:40:52.909312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.828 [2024-12-09 15:40:52.909338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.828 [2024-12-09 15:40:52.909411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.829 [2024-12-09 15:40:52.909426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.829 [2024-12-09 15:40:52.909480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.829 [2024-12-09 15:40:52.909494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.829 [2024-12-09 15:40:52.909548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.829 [2024-12-09 15:40:52.909562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 
cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.829 [2024-12-09 15:40:52.909617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.829 [2024-12-09 15:40:52.909632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:57.829 #8 NEW cov: 12335 ft: 14578 corp: 7/15b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CopyPart- 00:07:57.829 [2024-12-09 15:40:52.968841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.829 [2024-12-09 15:40:52.968871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.829 #9 NEW cov: 12335 ft: 14631 corp: 8/16b lim: 5 exec/s: 0 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:57.829 [2024-12-09 15:40:53.008979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.829 [2024-12-09 15:40:53.009010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.829 #10 NEW cov: 12335 ft: 14655 corp: 9/17b lim: 5 exec/s: 0 rss: 74Mb L: 1/5 MS: 1 ChangeByte- 00:07:57.829 [2024-12-09 15:40:53.049278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.829 [2024-12-09 15:40:53.049304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.829 [2024-12-09 15:40:53.049363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.829 [2024-12-09 15:40:53.049378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.088 #11 NEW cov: 12335 ft: 14690 corp: 10/19b lim: 5 exec/s: 0 rss: 74Mb L: 2/5 MS: 1 CopyPart- 00:07:58.088 [2024-12-09 15:40:53.109410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.088 [2024-12-09 15:40:53.109436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.088 [2024-12-09 15:40:53.109493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.088 [2024-12-09 15:40:53.109508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.088 #12 NEW cov: 12335 ft: 14723 corp: 11/21b lim: 5 exec/s: 0 rss: 74Mb L: 2/5 MS: 1 CrossOver- 00:07:58.088 [2024-12-09 15:40:53.169671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.088 [2024-12-09 15:40:53.169716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.088 [2024-12-09 15:40:53.169778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.088 [2024-12-09 15:40:53.169794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.088 #13 NEW cov: 12335 ft: 14776 corp: 12/23b lim: 5 exec/s: 0 rss: 74Mb L: 2/5 MS: 1 ChangeBit- 00:07:58.088 [2024-12-09 15:40:53.239899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.088 [2024-12-09 15:40:53.239927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.088 [2024-12-09 15:40:53.239987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.088 [2024-12-09 15:40:53.240003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.088 #14 NEW cov: 12335 ft: 14838 corp: 13/25b lim: 5 exec/s: 0 rss: 74Mb L: 2/5 MS: 1 CopyPart- 00:07:58.088 [2024-12-09 15:40:53.280022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.088 [2024-12-09 15:40:53.280049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.089 [2024-12-09 15:40:53.280120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.089 [2024-12-09 15:40:53.280138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.089 [2024-12-09 15:40:53.280196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.089 [2024-12-09 15:40:53.280211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.348 #15 NEW cov: 12335 ft: 15043 corp: 14/28b lim: 5 exec/s: 0 rss: 74Mb L: 3/5 MS: 1 CrossOver- 00:07:58.348 [2024-12-09 15:40:53.339893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.348 [2024-12-09 15:40:53.339924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.348 #16 NEW cov: 12335 ft: 15086 corp: 15/29b lim: 5 exec/s: 0 rss: 74Mb L: 1/5 MS: 1 ChangeByte- 00:07:58.348 [2024-12-09 15:40:53.400381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.348 [2024-12-09 15:40:53.400406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.348 [2024-12-09 15:40:53.400480] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.348 [2024-12-09 15:40:53.400495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.348 [2024-12-09 15:40:53.400552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.348 [2024-12-09 15:40:53.400566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.348 #17 NEW cov: 12335 ft: 15126 corp: 16/32b lim: 5 exec/s: 0 rss: 74Mb L: 3/5 MS: 1 ShuffleBytes- 00:07:58.348 [2024-12-09 15:40:53.460380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.348 [2024-12-09 15:40:53.460404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.348 [2024-12-09 15:40:53.460477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.348 [2024-12-09 15:40:53.460492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.348 #18 NEW cov: 12335 ft: 15229 corp: 17/34b lim: 5 exec/s: 0 rss: 74Mb L: 2/5 MS: 1 ChangeBit- 00:07:58.348 [2024-12-09 15:40:53.500317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.348 [2024-12-09 15:40:53.500341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.608 NEW_FUNC[1/1]: 0x1c54978 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:58.608 #19 NEW cov: 12358 ft: 15268 corp: 18/35b lim: 5 exec/s: 19 rss: 75Mb L: 1/5 MS: 1 EraseBytes- 00:07:58.608 [2024-12-09 15:40:53.821345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.608 [2024-12-09 15:40:53.821381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.608 [2024-12-09 15:40:53.821439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.608 [2024-12-09 15:40:53.821457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.868 #20 NEW cov: 12358 ft: 15274 corp: 19/37b lim: 5 exec/s: 20 rss: 75Mb L: 2/5 MS: 1 CopyPart- 00:07:58.868 [2024-12-09 15:40:53.861220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.868 [2024-12-09 15:40:53.861247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.868 #21 NEW cov: 
12358 ft: 15303 corp: 20/38b lim: 5 exec/s: 21 rss: 75Mb L: 1/5 MS: 1 ChangeByte- 00:07:58.868 [2024-12-09 15:40:53.901317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.868 [2024-12-09 15:40:53.901342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.868 #22 NEW cov: 12358 ft: 15321 corp: 21/39b lim: 5 exec/s: 22 rss: 75Mb L: 1/5 MS: 1 ChangeByte- 00:07:58.868 [2024-12-09 15:40:53.941398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.868 [2024-12-09 15:40:53.941423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.868 #23 NEW cov: 12358 ft: 15352 corp: 22/40b lim: 5 exec/s: 23 rss: 75Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:58.868 [2024-12-09 15:40:54.001884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.868 [2024-12-09 15:40:54.001910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.868 [2024-12-09 15:40:54.001967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.868 [2024-12-09 15:40:54.001982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.868 [2024-12-09 15:40:54.002036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.868 [2024-12-09 15:40:54.002049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.868 #24 NEW cov: 12358 ft: 15431 corp: 23/43b lim: 5 exec/s: 24 rss: 75Mb L: 3/5 MS: 1 InsertByte- 00:07:58.868 [2024-12-09 15:40:54.062377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.868 [2024-12-09 15:40:54.062403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.868 [2024-12-09 15:40:54.062472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.868 [2024-12-09 15:40:54.062487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.868 [2024-12-09 15:40:54.062541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.868 [2024-12-09 15:40:54.062556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.868 [2024-12-09 15:40:54.062610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT 
(0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.868 [2024-12-09 15:40:54.062627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.868 [2024-12-09 15:40:54.062679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.868 [2024-12-09 15:40:54.062693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:58.868 #25 NEW cov: 12358 ft: 15470 corp: 24/48b lim: 5 exec/s: 25 rss: 75Mb L: 5/5 MS: 1 InsertByte- 00:07:59.127 [2024-12-09 15:40:54.102038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.127 [2024-12-09 15:40:54.102062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.127 [2024-12-09 15:40:54.102132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.127 [2024-12-09 15:40:54.102147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.127 #26 NEW cov: 12358 ft: 15483 corp: 25/50b lim: 5 exec/s: 26 rss: 75Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:59.127 [2024-12-09 15:40:54.141996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.127 [2024-12-09 15:40:54.142021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.127 #27 NEW cov: 12358 ft: 15487 corp: 26/51b lim: 5 exec/s: 27 rss: 75Mb L: 1/5 MS: 1 ChangeByte- 00:07:59.127 [2024-12-09 15:40:54.182216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.127 [2024-12-09 15:40:54.182241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.127 [2024-12-09 15:40:54.182295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.127 [2024-12-09 15:40:54.182309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.127 #28 NEW cov: 12358 ft: 15514 corp: 27/53b lim: 5 exec/s: 28 rss: 75Mb L: 2/5 MS: 1 ChangeBinInt- 00:07:59.127 [2024-12-09 15:40:54.242399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.127 [2024-12-09 15:40:54.242423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.127 [2024-12-09 15:40:54.242495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:59.127 [2024-12-09 15:40:54.242509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.127 #29 NEW cov: 12358 ft: 15559 corp: 28/55b lim: 5 exec/s: 29 rss: 75Mb L: 2/5 MS: 1 ChangeByte- 00:07:59.127 [2024-12-09 15:40:54.282513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.127 [2024-12-09 15:40:54.282539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.127 [2024-12-09 15:40:54.282596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.127 [2024-12-09 15:40:54.282611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.127 #30 NEW cov: 12358 ft: 15578 corp: 29/57b lim: 5 exec/s: 30 rss: 75Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:59.127 [2024-12-09 15:40:54.322463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.127 [2024-12-09 15:40:54.322489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.387 #31 NEW cov: 12358 ft: 15587 corp: 30/58b lim: 5 exec/s: 31 rss: 75Mb L: 1/5 MS: 1 EraseBytes- 00:07:59.387 [2024-12-09 15:40:54.382825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.387 [2024-12-09 15:40:54.382858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.387 [2024-12-09 15:40:54.382915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.387 [2024-12-09 15:40:54.382929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.387 #32 NEW cov: 12358 ft: 15593 corp: 31/60b lim: 5 exec/s: 32 rss: 76Mb L: 2/5 MS: 1 CopyPart- 00:07:59.387 [2024-12-09 15:40:54.443043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.387 [2024-12-09 15:40:54.443068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.387 [2024-12-09 15:40:54.443125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.387 [2024-12-09 15:40:54.443139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.387 #33 NEW cov: 12358 ft: 15608 corp: 32/62b lim: 5 exec/s: 33 rss: 76Mb L: 2/5 MS: 1 CopyPart- 00:07:59.387 [2024-12-09 15:40:54.503540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:59.387 [2024-12-09 15:40:54.503568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.387 [2024-12-09 15:40:54.503628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.387 [2024-12-09 15:40:54.503643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.387 [2024-12-09 15:40:54.503701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.387 [2024-12-09 15:40:54.503716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.387 [2024-12-09 15:40:54.503773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.387 [2024-12-09 15:40:54.503788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:59.387 #34 NEW cov: 12358 ft: 15635 corp: 33/66b lim: 5 exec/s: 34 rss: 76Mb L: 4/5 MS: 1 InsertByte- 00:07:59.387 [2024-12-09 15:40:54.563349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.387 [2024-12-09 15:40:54.563375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.387 [2024-12-09 15:40:54.563433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.387 [2024-12-09 15:40:54.563448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.387 #35 NEW cov: 12358 ft: 15642 corp: 34/68b lim: 5 exec/s: 17 rss: 76Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:59.387 #35 DONE cov: 12358 ft: 15642 corp: 34/68b lim: 5 exec/s: 17 rss: 76Mb 00:07:59.388 Done 35 runs in 2 second(s) 00:07:59.647 15:40:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:07:59.647 15:40:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:59.647 15:40:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:59.647 15:40:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:07:59.647 15:40:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:07:59.647 15:40:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:59.647 15:40:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:59.647 15:40:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:59.647 15:40:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:07:59.647 15:40:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:59.647 15:40:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local 
LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:59.647 15:40:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:07:59.647 15:40:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:07:59.647 15:40:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:59.647 15:40:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:07:59.647 15:40:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:59.647 15:40:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:59.647 15:40:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:59.648 15:40:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:07:59.648 [2024-12-09 15:40:54.767791] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:07:59.648 [2024-12-09 15:40:54.767873] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid908606 ] 00:07:59.906 [2024-12-09 15:40:55.038546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.906 [2024-12-09 15:40:55.092992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.165 [2024-12-09 15:40:55.151930] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.165 [2024-12-09 15:40:55.168074] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:08:00.165 INFO: Running with entropic power schedule (0xFF, 100). 00:08:00.165 INFO: Seed: 10865786 00:08:00.165 INFO: Loaded 1 modules (390541 inline 8-bit counters): 390541 [0x2c82a4c, 0x2ce1fd9), 00:08:00.165 INFO: Loaded 1 PC tables (390541 PCs): 390541 [0x2ce1fe0,0x32d78b0), 00:08:00.165 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:08:00.165 INFO: A corpus is not provided, starting from an empty corpus 00:08:00.165 #2 INITED exec/s: 0 rss: 66Mb 00:08:00.165 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:00.165 This may also happen if the target rejected all inputs we tried so far 00:08:00.165 [2024-12-09 15:40:55.217310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1212120a cdw11:00000114 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.165 [2024-12-09 15:40:55.217338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.424 NEW_FUNC[1/716]: 0x448aa8 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:08:00.424 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:00.424 #11 NEW cov: 12149 ft: 12144 corp: 2/9b lim: 40 exec/s: 0 rss: 73Mb L: 8/8 MS: 4 CopyPart-ShuffleBytes-CMP-InsertRepeatedBytes- DE: "\000\000\001\024"- 00:08:00.424 [2024-12-09 15:40:55.548772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.424 [2024-12-09 15:40:55.548812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.424 [2024-12-09 15:40:55.548877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.424 [2024-12-09 15:40:55.548891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.424 [2024-12-09 15:40:55.548952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.424 [2024-12-09 15:40:55.548966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.424 [2024-12-09 15:40:55.549026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.424 [2024-12-09 15:40:55.549039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:00.424 [2024-12-09 15:40:55.549099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.424 [2024-12-09 15:40:55.549113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:00.424 #17 NEW cov: 12267 ft: 13185 corp: 3/49b lim: 40 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:08:00.424 [2024-12-09 15:40:55.588262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1212120a cdw11:00100114 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.424 [2024-12-09 15:40:55.588289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.424 #18 NEW cov: 12273 ft: 13467 corp: 4/57b lim: 40 exec/s: 0 rss: 73Mb L: 8/40 MS: 1 ChangeBit- 00:08:00.424 [2024-12-09 15:40:55.648455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY 
RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1212124a cdw11:00000114 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.424 [2024-12-09 15:40:55.648482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.683 #19 NEW cov: 12358 ft: 13830 corp: 5/65b lim: 40 exec/s: 0 rss: 73Mb L: 8/40 MS: 1 ChangeBit- 00:08:00.683 [2024-12-09 15:40:55.688918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.683 [2024-12-09 15:40:55.688944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.683 [2024-12-09 15:40:55.689023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.683 [2024-12-09 15:40:55.689038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.683 [2024-12-09 15:40:55.689096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.683 [2024-12-09 15:40:55.689109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.683 [2024-12-09 15:40:55.689170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.683 [2024-12-09 15:40:55.689184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:00.683 #20 NEW cov: 12358 ft: 13953 corp: 6/103b lim: 40 exec/s: 0 rss: 73Mb L: 38/40 MS: 1 EraseBytes- 00:08:00.683 [2024-12-09 15:40:55.748704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.683 [2024-12-09 15:40:55.748730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.683 #22 NEW cov: 12358 ft: 14104 corp: 7/112b lim: 40 exec/s: 0 rss: 73Mb L: 9/40 MS: 2 ChangeByte-InsertRepeatedBytes- 00:08:00.683 [2024-12-09 15:40:55.789099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.683 [2024-12-09 15:40:55.789125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.683 [2024-12-09 15:40:55.789202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.683 [2024-12-09 15:40:55.789217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.683 [2024-12-09 15:40:55.789275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ff121212 cdw11:0a000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.683 [2024-12-09 15:40:55.789289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.683 #23 NEW cov: 12358 ft: 14419 corp: 8/137b lim: 40 exec/s: 0 rss: 73Mb L: 25/40 MS: 1 InsertRepeatedBytes- 00:08:00.683 [2024-12-09 15:40:55.828962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1212c40a cdw11:00100114 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.683 [2024-12-09 15:40:55.828987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.683 #24 NEW cov: 12358 ft: 14459 corp: 9/145b lim: 40 exec/s: 0 rss: 74Mb L: 8/40 MS: 1 ChangeByte- 00:08:00.683 [2024-12-09 15:40:55.889111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12121212 cdw11:4a000114 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.683 [2024-12-09 15:40:55.889135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.942 #25 NEW cov: 12358 ft: 14500 corp: 10/153b lim: 40 exec/s: 0 rss: 74Mb L: 8/40 MS: 1 CopyPart- 00:08:00.943 [2024-12-09 15:40:55.949265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12424242 cdw11:12120a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.943 [2024-12-09 15:40:55.949291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.943 #26 NEW cov: 12358 ft: 14556 corp: 11/164b lim: 40 exec/s: 0 rss: 74Mb L: 11/40 MS: 1 InsertRepeatedBytes- 00:08:00.943 [2024-12-09 15:40:55.989375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:9f121212 cdw11:124a0001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.943 [2024-12-09 15:40:55.989400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.943 #27 NEW cov: 12358 ft: 14573 corp: 12/173b lim: 40 exec/s: 0 rss: 74Mb L: 9/40 MS: 1 InsertByte- 00:08:00.943 [2024-12-09 15:40:56.049547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.943 [2024-12-09 15:40:56.049571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.943 #32 NEW cov: 12358 ft: 14580 corp: 13/184b lim: 40 exec/s: 0 rss: 74Mb L: 11/40 MS: 5 ChangeByte-InsertByte-EraseBytes-CopyPart-InsertRepeatedBytes- 00:08:00.943 [2024-12-09 15:40:56.089647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12121212 cdw11:4a000104 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.943 [2024-12-09 15:40:56.089672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.943 NEW_FUNC[1/1]: 0x1c54978 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:00.943 #33 NEW cov: 12381 ft: 14627 corp: 14/192b lim: 40 exec/s: 0 rss: 74Mb L: 8/40 MS: 1 ChangeBit- 00:08:00.943 [2024-12-09 15:40:56.130039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.943 [2024-12-09 15:40:56.130064] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.943 [2024-12-09 15:40:56.130126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:fffffeff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.943 [2024-12-09 15:40:56.130140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.943 [2024-12-09 15:40:56.130201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ff121212 cdw11:0a000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.943 [2024-12-09 15:40:56.130214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.202 #34 NEW cov: 12381 ft: 14726 corp: 15/217b lim: 40 exec/s: 0 rss: 74Mb L: 25/40 MS: 1 ChangeBinInt- 00:08:01.202 [2024-12-09 15:40:56.189937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffff0000 cdw11:00000008 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.202 [2024-12-09 15:40:56.189962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.202 #35 NEW cov: 12381 ft: 14802 corp: 16/230b lim: 40 exec/s: 35 rss: 74Mb L: 13/40 MS: 1 CMP- DE: "\010\000"- 00:08:01.202 [2024-12-09 15:40:56.250252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12424242 cdw11:12120a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.202 [2024-12-09 15:40:56.250277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.202 [2024-12-09 15:40:56.250340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.202 [2024-12-09 15:40:56.250353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.202 #36 NEW cov: 12381 ft: 15000 corp: 17/253b lim: 40 exec/s: 36 rss: 74Mb L: 23/40 MS: 1 InsertRepeatedBytes- 00:08:01.202 [2024-12-09 15:40:56.310281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12122c12 cdw11:124a0001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.202 [2024-12-09 15:40:56.310309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.202 #37 NEW cov: 12381 ft: 15020 corp: 18/262b lim: 40 exec/s: 37 rss: 74Mb L: 9/40 MS: 1 InsertByte- 00:08:01.202 [2024-12-09 15:40:56.350362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1212c40a cdw11:00100114 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.202 [2024-12-09 15:40:56.350388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.202 #38 NEW cov: 12381 ft: 15032 corp: 19/270b lim: 40 exec/s: 38 rss: 74Mb L: 8/40 MS: 1 CopyPart- 00:08:01.202 [2024-12-09 15:40:56.410598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1212124a cdw11:007a0114 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.202 [2024-12-09 
15:40:56.410623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.461 #39 NEW cov: 12381 ft: 15085 corp: 20/278b lim: 40 exec/s: 39 rss: 74Mb L: 8/40 MS: 1 ChangeByte- 00:08:01.461 [2024-12-09 15:40:56.450692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12424242 cdw11:014114d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.461 [2024-12-09 15:40:56.450717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.461 #42 NEW cov: 12381 ft: 15089 corp: 21/286b lim: 40 exec/s: 42 rss: 74Mb L: 8/40 MS: 3 EraseBytes-InsertByte-InsertByte- 00:08:01.461 [2024-12-09 15:40:56.490774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12424212 cdw11:120a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.461 [2024-12-09 15:40:56.490799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.461 #43 NEW cov: 12381 ft: 15119 corp: 22/296b lim: 40 exec/s: 43 rss: 74Mb L: 10/40 MS: 1 EraseBytes- 00:08:01.461 [2024-12-09 15:40:56.531059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.462 [2024-12-09 15:40:56.531085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.462 [2024-12-09 15:40:56.531148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:1212120a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.462 [2024-12-09 15:40:56.531163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.462 #49 NEW cov: 12381 ft: 15136 corp: 23/316b lim: 40 exec/s: 49 rss: 74Mb L: 20/40 MS: 1 EraseBytes- 00:08:01.462 [2024-12-09 15:40:56.571299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.462 [2024-12-09 15:40:56.571324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.462 [2024-12-09 15:40:56.571387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:2cfffeff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.462 [2024-12-09 15:40:56.571401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.462 [2024-12-09 15:40:56.571463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ff121212 cdw11:0a000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.462 [2024-12-09 15:40:56.571476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.462 #50 NEW cov: 12381 ft: 15185 corp: 24/341b lim: 40 exec/s: 50 rss: 74Mb L: 25/40 MS: 1 ChangeByte- 00:08:01.462 [2024-12-09 15:40:56.631203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffff0000 cdw11:00000008 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:01.462 [2024-12-09 15:40:56.631228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.462 #51 NEW cov: 12381 ft: 15294 corp: 25/354b lim: 40 exec/s: 51 rss: 74Mb L: 13/40 MS: 1 ChangeBit- 00:08:01.721 [2024-12-09 15:40:56.691384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:bdbdbd08 cdw11:00bdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.721 [2024-12-09 15:40:56.691410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.721 #52 NEW cov: 12381 ft: 15309 corp: 26/363b lim: 40 exec/s: 52 rss: 74Mb L: 9/40 MS: 1 PersAutoDict- DE: "\010\000"- 00:08:01.721 [2024-12-09 15:40:56.751773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12121212 cdw11:4a000114 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.721 [2024-12-09 15:40:56.751798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.721 [2024-12-09 15:40:56.751881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:77777777 cdw11:77777777 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.721 [2024-12-09 15:40:56.751896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.721 [2024-12-09 15:40:56.751956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:77777777 cdw11:77777777 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.721 [2024-12-09 15:40:56.751970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.721 #53 NEW cov: 12381 ft: 15324 corp: 27/389b lim: 40 exec/s: 53 rss: 74Mb L: 26/40 MS: 1 InsertRepeatedBytes- 00:08:01.721 [2024-12-09 15:40:56.792090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.721 [2024-12-09 15:40:56.792115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.721 [2024-12-09 15:40:56.792194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.721 [2024-12-09 15:40:56.792208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.721 [2024-12-09 15:40:56.792269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.721 [2024-12-09 15:40:56.792283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.721 [2024-12-09 15:40:56.792341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.721 [2024-12-09 15:40:56.792355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:08:01.721 #54 NEW cov: 12381 ft: 15359 corp: 28/427b lim: 40 exec/s: 54 rss: 74Mb L: 38/40 MS: 1 CopyPart- 00:08:01.721 [2024-12-09 15:40:56.852048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.721 [2024-12-09 15:40:56.852071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.721 [2024-12-09 15:40:56.852151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.721 [2024-12-09 15:40:56.852169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.721 [2024-12-09 15:40:56.852230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ff121212 cdw11:0a000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.721 [2024-12-09 15:40:56.852244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.721 #55 NEW cov: 12381 ft: 15363 corp: 29/452b lim: 40 exec/s: 55 rss: 74Mb L: 25/40 MS: 1 ShuffleBytes- 00:08:01.721 [2024-12-09 15:40:56.892070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.721 [2024-12-09 15:40:56.892095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.721 [2024-12-09 15:40:56.892159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00424242 cdw11:014114d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.721 [2024-12-09 15:40:56.892173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.721 #56 NEW cov: 12381 ft: 15367 corp: 30/468b lim: 40 exec/s: 56 rss: 74Mb L: 16/40 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:08:01.981 [2024-12-09 15:40:56.952518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.981 [2024-12-09 15:40:56.952545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.981 [2024-12-09 15:40:56.952623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:2cfffeff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.981 [2024-12-09 15:40:56.952637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.981 [2024-12-09 15:40:56.952698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffff01 cdw11:587c0dff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.981 [2024-12-09 15:40:56.952711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.981 [2024-12-09 15:40:56.952770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ff121212 cdw11:0a000001 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.981 [2024-12-09 15:40:56.952783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:01.981 #57 NEW cov: 12381 ft: 15390 corp: 31/501b lim: 40 exec/s: 57 rss: 74Mb L: 33/40 MS: 1 CMP- DE: "\377\377\377\377\001X|\015"- 00:08:01.981 [2024-12-09 15:40:57.012557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.981 [2024-12-09 15:40:57.012583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.981 [2024-12-09 15:40:57.012665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:2cfffeff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.981 [2024-12-09 15:40:57.012680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.981 [2024-12-09 15:40:57.012742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ff121212 cdw11:0a000011 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.981 [2024-12-09 15:40:57.012755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.981 #63 NEW cov: 12381 ft: 15398 corp: 32/529b lim: 40 exec/s: 63 rss: 74Mb L: 28/40 MS: 1 InsertRepeatedBytes- 00:08:01.981 [2024-12-09 15:40:57.052394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.981 [2024-12-09 15:40:57.052419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.981 #64 NEW cov: 12381 ft: 15476 corp: 33/540b lim: 40 exec/s: 64 rss: 74Mb L: 11/40 MS: 1 CMP- DE: "\001\014"- 00:08:01.981 [2024-12-09 15:40:57.092786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.981 [2024-12-09 15:40:57.092811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.981 [2024-12-09 15:40:57.092899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:2cfffeff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.981 [2024-12-09 15:40:57.092915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.981 [2024-12-09 15:40:57.092976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ff121212 cdw11:0a000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.981 [2024-12-09 15:40:57.092989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.981 #65 NEW cov: 12381 ft: 15485 corp: 34/565b lim: 40 exec/s: 65 rss: 74Mb L: 25/40 MS: 1 CopyPart- 00:08:01.981 [2024-12-09 15:40:57.132606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:9f121212 cdw11:124a0069 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.981 
[2024-12-09 15:40:57.132631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.981 #66 NEW cov: 12381 ft: 15500 corp: 35/575b lim: 40 exec/s: 66 rss: 74Mb L: 10/40 MS: 1 InsertByte- 00:08:01.981 [2024-12-09 15:40:57.193070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffff08ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.981 [2024-12-09 15:40:57.193094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.981 [2024-12-09 15:40:57.193174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.981 [2024-12-09 15:40:57.193188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.981 [2024-12-09 15:40:57.193249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ff121212 cdw11:0a000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.981 [2024-12-09 15:40:57.193263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.241 #67 NEW cov: 12381 ft: 15504 corp: 36/600b lim: 40 exec/s: 33 rss: 74Mb L: 25/40 MS: 1 ChangeBinInt- 00:08:02.241 #67 DONE cov: 12381 ft: 15504 corp: 36/600b lim: 40 exec/s: 33 rss: 74Mb 00:08:02.241 ###### Recommended dictionary. ###### 00:08:02.241 "\000\000\001\024" # Uses: 0 00:08:02.241 "\010\000" # Uses: 1 00:08:02.241 "\000\000\000\000\000\000\000\000" # Uses: 0 00:08:02.241 "\377\377\377\377\001X|\015" # Uses: 0 00:08:02.241 "\001\014" # Uses: 0 00:08:02.241 ###### End of recommended dictionary. 
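The recommended dictionary printed above can be carried forward into later fuzz runs. Below is a minimal sketch of saving those entries as a libFuzzer-style dictionary file; the file name nvmf_recommended.dict and the entry names kw1..kw5 are illustrative only, the octal escapes from the log are re-encoded as \xNN hex escapes, and it is an assumption (not shown in this log) that the llvm_nvme_fuzz harness forwards a -dict= option through to libFuzzer.

cat > nvmf_recommended.dict <<'EOF'
# Entries reported under "Recommended dictionary" above, re-encoded for libFuzzer:
# \000\000\001\024 -> \x00\x00\x01\x14, \010\000 -> \x08\x00, etc.
kw1="\x00\x00\x01\x14"
kw2="\x08\x00"
kw3="\x00\x00\x00\x00\x00\x00\x00\x00"
kw4="\xff\xff\xff\xff\x01X|\x0d"
kw5="\x01\x0c"
EOF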
###### 00:08:02.241 Done 67 runs in 2 second(s) 00:08:02.241 15:40:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:08:02.241 15:40:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:02.241 15:40:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:02.241 15:40:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:08:02.241 15:40:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:08:02.241 15:40:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:02.241 15:40:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:02.241 15:40:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:02.241 15:40:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:08:02.241 15:40:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:02.241 15:40:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:02.241 15:40:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:08:02.241 15:40:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:08:02.241 15:40:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:02.241 15:40:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:08:02.241 15:40:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:02.241 15:40:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:02.241 15:40:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:02.241 15:40:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:08:02.241 [2024-12-09 15:40:57.370188] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:08:02.242 [2024-12-09 15:40:57.370261] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid908962 ] 00:08:02.501 [2024-12-09 15:40:57.639188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.501 [2024-12-09 15:40:57.693046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.760 [2024-12-09 15:40:57.752023] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.760 [2024-12-09 15:40:57.768161] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:08:02.760 INFO: Running with entropic power schedule (0xFF, 100). 00:08:02.760 INFO: Seed: 2610839080 00:08:02.760 INFO: Loaded 1 modules (390541 inline 8-bit counters): 390541 [0x2c82a4c, 0x2ce1fd9), 00:08:02.760 INFO: Loaded 1 PC tables (390541 PCs): 390541 [0x2ce1fe0,0x32d78b0), 00:08:02.760 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:02.760 INFO: A corpus is not provided, starting from an empty corpus 00:08:02.760 #2 INITED exec/s: 0 rss: 66Mb 00:08:02.760 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:02.760 This may also happen if the target rejected all inputs we tried so far 00:08:02.760 [2024-12-09 15:40:57.817366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:88696969 cdw11:69696969 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.760 [2024-12-09 15:40:57.817395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.020 NEW_FUNC[1/717]: 0x44a818 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:08:03.020 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:03.020 #5 NEW cov: 12166 ft: 12157 corp: 2/16b lim: 40 exec/s: 0 rss: 73Mb L: 15/15 MS: 3 CrossOver-ChangeByte-InsertRepeatedBytes- 00:08:03.020 [2024-12-09 15:40:58.138334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:88696969 cdw11:69696969 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.020 [2024-12-09 15:40:58.138384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.020 #6 NEW cov: 12279 ft: 12724 corp: 3/28b lim: 40 exec/s: 0 rss: 74Mb L: 12/15 MS: 1 EraseBytes- 00:08:03.020 [2024-12-09 15:40:58.198466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2c2c2c2c cdw11:2c2c2c2c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.020 [2024-12-09 15:40:58.198493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.020 [2024-12-09 15:40:58.198555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:2c2c2c2c cdw11:2c2c2c2c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.020 [2024-12-09 15:40:58.198569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.020 #12 NEW cov: 
12285 ft: 13568 corp: 4/48b lim: 40 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:08:03.020 [2024-12-09 15:40:58.238371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:88696969 cdw11:6969696d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.020 [2024-12-09 15:40:58.238397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.279 #13 NEW cov: 12370 ft: 13938 corp: 5/60b lim: 40 exec/s: 0 rss: 74Mb L: 12/20 MS: 1 ChangeBit- 00:08:03.279 [2024-12-09 15:40:58.298872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.279 [2024-12-09 15:40:58.298898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.279 [2024-12-09 15:40:58.298959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.279 [2024-12-09 15:40:58.298973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.279 [2024-12-09 15:40:58.299031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.279 [2024-12-09 15:40:58.299046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.279 #15 NEW cov: 12370 ft: 14316 corp: 6/88b lim: 40 exec/s: 0 rss: 74Mb L: 28/28 MS: 2 ChangeByte-InsertRepeatedBytes- 00:08:03.279 [2024-12-09 15:40:58.338660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:88696969 cdw11:69692969 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.279 [2024-12-09 15:40:58.338686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.279 #16 NEW cov: 12370 ft: 14481 corp: 7/103b lim: 40 exec/s: 0 rss: 74Mb L: 15/28 MS: 1 ChangeBit- 00:08:03.279 [2024-12-09 15:40:58.379129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.279 [2024-12-09 15:40:58.379155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.280 [2024-12-09 15:40:58.379216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.280 [2024-12-09 15:40:58.379230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.280 [2024-12-09 15:40:58.379289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.280 [2024-12-09 15:40:58.379306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.280 #17 NEW cov: 12370 ft: 14555 corp: 8/131b lim: 40 exec/s: 0 rss: 74Mb L: 28/28 MS: 1 ChangeBinInt- 00:08:03.280 [2024-12-09 
15:40:58.439459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.280 [2024-12-09 15:40:58.439485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.280 [2024-12-09 15:40:58.439544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.280 [2024-12-09 15:40:58.439558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.280 [2024-12-09 15:40:58.439634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.280 [2024-12-09 15:40:58.439649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.280 [2024-12-09 15:40:58.439707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.280 [2024-12-09 15:40:58.439720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.280 #18 NEW cov: 12370 ft: 14896 corp: 9/169b lim: 40 exec/s: 0 rss: 74Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:08:03.280 [2024-12-09 15:40:58.499483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.280 [2024-12-09 15:40:58.499509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.280 [2024-12-09 15:40:58.499585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.280 [2024-12-09 15:40:58.499599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.280 [2024-12-09 15:40:58.499656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:fffffa30 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.280 [2024-12-09 15:40:58.499670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.539 #19 NEW cov: 12370 ft: 14951 corp: 10/193b lim: 40 exec/s: 0 rss: 74Mb L: 24/38 MS: 1 EraseBytes- 00:08:03.539 [2024-12-09 15:40:58.559470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.539 [2024-12-09 15:40:58.559496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.539 [2024-12-09 15:40:58.559572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.539 [2024-12-09 15:40:58.559587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:08:03.539 #20 NEW cov: 12370 ft: 14983 corp: 11/210b lim: 40 exec/s: 0 rss: 74Mb L: 17/38 MS: 1 InsertRepeatedBytes- 00:08:03.539 [2024-12-09 15:40:58.599740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.539 [2024-12-09 15:40:58.599766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.539 [2024-12-09 15:40:58.599827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.539 [2024-12-09 15:40:58.599850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.539 [2024-12-09 15:40:58.599908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:fffffffa cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.539 [2024-12-09 15:40:58.599922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.539 #21 NEW cov: 12370 ft: 14987 corp: 12/238b lim: 40 exec/s: 0 rss: 74Mb L: 28/38 MS: 1 ChangeBinInt- 00:08:03.539 [2024-12-09 15:40:58.639530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:88696969 cdw11:69696969 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.540 [2024-12-09 15:40:58.639555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.540 #22 NEW cov: 12370 ft: 15107 corp: 13/250b lim: 40 exec/s: 0 rss: 74Mb L: 12/38 MS: 1 CopyPart- 00:08:03.540 [2024-12-09 15:40:58.679962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.540 [2024-12-09 15:40:58.679986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.540 [2024-12-09 15:40:58.680049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.540 [2024-12-09 15:40:58.680062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.540 [2024-12-09 15:40:58.680120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.540 [2024-12-09 15:40:58.680134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.540 NEW_FUNC[1/1]: 0x1c54978 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:03.540 #23 NEW cov: 12393 ft: 15119 corp: 14/278b lim: 40 exec/s: 0 rss: 74Mb L: 28/38 MS: 1 ShuffleBytes- 00:08:03.540 [2024-12-09 15:40:58.719797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:88696969 cdw11:69692969 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.540 [2024-12-09 15:40:58.719823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:08:03.540 #24 NEW cov: 12393 ft: 15157 corp: 15/293b lim: 40 exec/s: 0 rss: 74Mb L: 15/38 MS: 1 ChangeBit- 00:08:03.799 [2024-12-09 15:40:58.780299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.799 [2024-12-09 15:40:58.780325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.799 [2024-12-09 15:40:58.780386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffff19 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.799 [2024-12-09 15:40:58.780400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.799 [2024-12-09 15:40:58.780461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:faffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.799 [2024-12-09 15:40:58.780474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.799 #25 NEW cov: 12393 ft: 15175 corp: 16/322b lim: 40 exec/s: 25 rss: 74Mb L: 29/38 MS: 1 InsertByte- 00:08:03.799 [2024-12-09 15:40:58.840451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff06 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.799 [2024-12-09 15:40:58.840480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.799 [2024-12-09 15:40:58.840541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.799 [2024-12-09 15:40:58.840555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.799 [2024-12-09 15:40:58.840614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.799 [2024-12-09 15:40:58.840627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.799 #26 NEW cov: 12393 ft: 15186 corp: 17/350b lim: 40 exec/s: 26 rss: 74Mb L: 28/38 MS: 1 ChangeBinInt- 00:08:03.799 [2024-12-09 15:40:58.900264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:88696969 cdw11:6969696d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.799 [2024-12-09 15:40:58.900289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.799 #32 NEW cov: 12393 ft: 15215 corp: 18/362b lim: 40 exec/s: 32 rss: 74Mb L: 12/38 MS: 1 ShuffleBytes- 00:08:03.799 [2024-12-09 15:40:58.960623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2c2c2c2c cdw11:2c2c2c25 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.799 [2024-12-09 15:40:58.960648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.799 [2024-12-09 15:40:58.960710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 
nsid:0 cdw10:2c2c2c2c cdw11:2c2c2c2c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.799 [2024-12-09 15:40:58.960725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.799 #33 NEW cov: 12393 ft: 15239 corp: 19/382b lim: 40 exec/s: 33 rss: 75Mb L: 20/38 MS: 1 ChangeBinInt- 00:08:03.799 [2024-12-09 15:40:59.020954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.799 [2024-12-09 15:40:59.020980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.799 [2024-12-09 15:40:59.021048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.799 [2024-12-09 15:40:59.021063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.799 [2024-12-09 15:40:59.021121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.800 [2024-12-09 15:40:59.021135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.059 #34 NEW cov: 12393 ft: 15292 corp: 20/410b lim: 40 exec/s: 34 rss: 75Mb L: 28/38 MS: 1 ShuffleBytes- 00:08:04.059 [2024-12-09 15:40:59.060731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2f696969 cdw11:69696969 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.059 [2024-12-09 15:40:59.060756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.059 #35 NEW cov: 12393 ft: 15324 corp: 21/422b lim: 40 exec/s: 35 rss: 75Mb L: 12/38 MS: 1 ChangeByte- 00:08:04.059 [2024-12-09 15:40:59.101145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.059 [2024-12-09 15:40:59.101173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.059 [2024-12-09 15:40:59.101234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.059 [2024-12-09 15:40:59.101248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.059 [2024-12-09 15:40:59.101309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.059 [2024-12-09 15:40:59.101322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.059 #36 NEW cov: 12393 ft: 15362 corp: 22/452b lim: 40 exec/s: 36 rss: 75Mb L: 30/38 MS: 1 CopyPart- 00:08:04.059 [2024-12-09 15:40:59.140937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:88696969 cdw11:79696969 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.059 [2024-12-09 15:40:59.140962] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.059 #37 NEW cov: 12393 ft: 15387 corp: 23/464b lim: 40 exec/s: 37 rss: 75Mb L: 12/38 MS: 1 ChangeBit- 00:08:04.059 [2024-12-09 15:40:59.201118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:88696969 cdw11:69692969 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.059 [2024-12-09 15:40:59.201143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.059 #38 NEW cov: 12393 ft: 15440 corp: 24/479b lim: 40 exec/s: 38 rss: 75Mb L: 15/38 MS: 1 CrossOver- 00:08:04.059 [2024-12-09 15:40:59.241701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.059 [2024-12-09 15:40:59.241726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.059 [2024-12-09 15:40:59.241786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.059 [2024-12-09 15:40:59.241800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.059 [2024-12-09 15:40:59.241883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.059 [2024-12-09 15:40:59.241898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.059 [2024-12-09 15:40:59.241957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffff0c cdw11:0c0c0c0c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.059 [2024-12-09 15:40:59.241970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:04.319 #39 NEW cov: 12393 ft: 15444 corp: 25/514b lim: 40 exec/s: 39 rss: 75Mb L: 35/38 MS: 1 InsertRepeatedBytes- 00:08:04.319 [2024-12-09 15:40:59.301711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.319 [2024-12-09 15:40:59.301735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.319 [2024-12-09 15:40:59.301794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.319 [2024-12-09 15:40:59.301808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.319 [2024-12-09 15:40:59.301875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.319 [2024-12-09 15:40:59.301892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.319 #40 NEW cov: 12393 ft: 15464 corp: 26/542b lim: 40 exec/s: 40 rss: 75Mb L: 28/38 MS: 1 ShuffleBytes- 00:08:04.319 
[2024-12-09 15:40:59.341466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:88696969 cdw11:69696940 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.319 [2024-12-09 15:40:59.341490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.319 #41 NEW cov: 12393 ft: 15487 corp: 27/557b lim: 40 exec/s: 41 rss: 75Mb L: 15/38 MS: 1 ChangeByte- 00:08:04.319 [2024-12-09 15:40:59.381723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:88696988 cdw11:69696979 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.319 [2024-12-09 15:40:59.381748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.319 [2024-12-09 15:40:59.381809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:69696969 cdw11:69696979 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.319 [2024-12-09 15:40:59.381823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.319 #42 NEW cov: 12393 ft: 15498 corp: 28/580b lim: 40 exec/s: 42 rss: 75Mb L: 23/38 MS: 1 CopyPart- 00:08:04.319 [2024-12-09 15:40:59.442417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:88696969 cdw11:69692969 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.319 [2024-12-09 15:40:59.442442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.319 [2024-12-09 15:40:59.442501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.319 [2024-12-09 15:40:59.442515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.319 [2024-12-09 15:40:59.442574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.319 [2024-12-09 15:40:59.442587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.319 [2024-12-09 15:40:59.442646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:faffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.319 [2024-12-09 15:40:59.442660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:04.319 [2024-12-09 15:40:59.442717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:306969ff cdw11:ffff6969 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.319 [2024-12-09 15:40:59.442731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:04.319 #43 NEW cov: 12393 ft: 15627 corp: 29/620b lim: 40 exec/s: 43 rss: 75Mb L: 40/40 MS: 1 CrossOver- 00:08:04.319 [2024-12-09 15:40:59.501924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.319 [2024-12-09 15:40:59.501949] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.319 #44 NEW cov: 12393 ft: 15635 corp: 30/632b lim: 40 exec/s: 44 rss: 75Mb L: 12/40 MS: 1 CrossOver- 00:08:04.319 [2024-12-09 15:40:59.542192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:88696988 cdw11:692a6979 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.319 [2024-12-09 15:40:59.542221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.319 [2024-12-09 15:40:59.542281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:69696969 cdw11:69696979 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.319 [2024-12-09 15:40:59.542295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.579 #45 NEW cov: 12393 ft: 15649 corp: 31/655b lim: 40 exec/s: 45 rss: 75Mb L: 23/40 MS: 1 ChangeByte- 00:08:04.579 [2024-12-09 15:40:59.602373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:88696988 cdw11:792a6979 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.579 [2024-12-09 15:40:59.602398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.579 [2024-12-09 15:40:59.602459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:69696969 cdw11:69696979 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.579 [2024-12-09 15:40:59.602473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.579 #46 NEW cov: 12393 ft: 15665 corp: 32/678b lim: 40 exec/s: 46 rss: 75Mb L: 23/40 MS: 1 ChangeBit- 00:08:04.579 [2024-12-09 15:40:59.662561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:88696969 cdw11:69692969 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.579 [2024-12-09 15:40:59.662587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.579 [2024-12-09 15:40:59.662650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:69696969 cdw11:d5696969 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.579 [2024-12-09 15:40:59.662665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.579 #47 NEW cov: 12393 ft: 15718 corp: 33/694b lim: 40 exec/s: 47 rss: 75Mb L: 16/40 MS: 1 InsertByte- 00:08:04.579 [2024-12-09 15:40:59.702812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.579 [2024-12-09 15:40:59.702837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.579 [2024-12-09 15:40:59.702904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.579 [2024-12-09 15:40:59.702919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.579 [2024-12-09 
15:40:59.702979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:3affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.579 [2024-12-09 15:40:59.702992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.579 #48 NEW cov: 12393 ft: 15755 corp: 34/722b lim: 40 exec/s: 48 rss: 75Mb L: 28/40 MS: 1 ChangeByte- 00:08:04.579 [2024-12-09 15:40:59.742746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2f696969 cdw11:69693169 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.579 [2024-12-09 15:40:59.742773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.579 #49 NEW cov: 12393 ft: 15800 corp: 35/734b lim: 40 exec/s: 49 rss: 75Mb L: 12/40 MS: 1 ChangeByte- 00:08:04.579 [2024-12-09 15:40:59.802732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:8869ff69 cdw11:69696969 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.579 [2024-12-09 15:40:59.802759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.839 #50 NEW cov: 12393 ft: 15814 corp: 36/747b lim: 40 exec/s: 25 rss: 75Mb L: 13/40 MS: 1 InsertByte- 00:08:04.839 #50 DONE cov: 12393 ft: 15814 corp: 36/747b lim: 40 exec/s: 25 rss: 75Mb 00:08:04.839 Done 50 runs in 2 second(s) 00:08:04.839 15:40:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:08:04.839 15:40:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:04.839 15:40:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:04.839 15:40:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:08:04.839 15:40:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:08:04.839 15:40:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:04.839 15:40:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:04.839 15:40:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:08:04.839 15:40:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:08:04.839 15:40:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:04.839 15:40:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:04.839 15:40:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:08:04.839 15:40:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:08:04.839 15:40:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:08:04.839 15:40:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:08:04.840 15:40:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:04.840 15:40:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo 
leak:spdk_nvmf_qpair_disconnect 00:08:04.840 15:40:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:04.840 15:40:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:08:04.840 [2024-12-09 15:40:59.979855] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:08:04.840 [2024-12-09 15:40:59.979943] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid909337 ] 00:08:05.099 [2024-12-09 15:41:00.249932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.099 [2024-12-09 15:41:00.298359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.358 [2024-12-09 15:41:00.357940] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.358 [2024-12-09 15:41:00.374089] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:08:05.358 INFO: Running with entropic power schedule (0xFF, 100). 00:08:05.358 INFO: Seed: 922882180 00:08:05.358 INFO: Loaded 1 modules (390541 inline 8-bit counters): 390541 [0x2c82a4c, 0x2ce1fd9), 00:08:05.358 INFO: Loaded 1 PC tables (390541 PCs): 390541 [0x2ce1fe0,0x32d78b0), 00:08:05.358 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:08:05.358 INFO: A corpus is not provided, starting from an empty corpus 00:08:05.358 #2 INITED exec/s: 0 rss: 66Mb 00:08:05.358 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:05.358 This may also happen if the target rejected all inputs we tried so far 00:08:05.358 [2024-12-09 15:41:00.441768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.358 [2024-12-09 15:41:00.441825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.358 [2024-12-09 15:41:00.441941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.358 [2024-12-09 15:41:00.441962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.358 [2024-12-09 15:41:00.442070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.358 [2024-12-09 15:41:00.442088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.358 [2024-12-09 15:41:00.442206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.358 [2024-12-09 15:41:00.442225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:05.617 NEW_FUNC[1/717]: 0x44c588 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:08:05.617 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:05.617 #6 NEW cov: 12162 ft: 12165 corp: 2/39b lim: 40 exec/s: 0 rss: 73Mb L: 38/38 MS: 4 ChangeBit-ChangeBinInt-CrossOver-InsertRepeatedBytes- 00:08:05.617 [2024-12-09 15:41:00.792526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.617 [2024-12-09 15:41:00.792565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.617 [2024-12-09 15:41:00.792662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.617 [2024-12-09 15:41:00.792679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.617 #8 NEW cov: 12277 ft: 13168 corp: 3/61b lim: 40 exec/s: 0 rss: 73Mb L: 22/38 MS: 2 CrossOver-InsertRepeatedBytes- 00:08:05.877 [2024-12-09 15:41:00.863494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fffffffe cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.877 [2024-12-09 15:41:00.863523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.877 [2024-12-09 15:41:00.863654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.877 [2024-12-09 15:41:00.863673] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.877 [2024-12-09 15:41:00.863767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.877 [2024-12-09 15:41:00.863786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.877 [2024-12-09 15:41:00.863894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.877 [2024-12-09 15:41:00.863911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:05.877 #9 NEW cov: 12283 ft: 13368 corp: 4/99b lim: 40 exec/s: 0 rss: 73Mb L: 38/38 MS: 1 ChangeBit- 00:08:05.877 [2024-12-09 15:41:00.913005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.877 [2024-12-09 15:41:00.913035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.877 [2024-12-09 15:41:00.913125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00090000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.877 [2024-12-09 15:41:00.913143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.877 #10 NEW cov: 12368 ft: 13557 corp: 5/121b lim: 40 exec/s: 0 rss: 73Mb L: 22/38 MS: 1 ChangeBinInt- 00:08:05.877 [2024-12-09 15:41:00.983973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fffffffe cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.877 [2024-12-09 15:41:00.984001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.877 [2024-12-09 15:41:00.984088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.877 [2024-12-09 15:41:00.984104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.877 [2024-12-09 15:41:00.984203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.877 [2024-12-09 15:41:00.984219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.877 [2024-12-09 15:41:00.984314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.877 [2024-12-09 15:41:00.984329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:05.877 #16 NEW cov: 12368 ft: 13688 corp: 6/159b lim: 40 exec/s: 0 rss: 74Mb L: 38/38 MS: 1 ShuffleBytes- 00:08:05.877 [2024-12-09 15:41:01.053549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) 
qid:0 cid:4 nsid:0 cdw10:ff000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.877 [2024-12-09 15:41:01.053579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.877 [2024-12-09 15:41:01.053688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.877 [2024-12-09 15:41:01.053717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.877 #17 NEW cov: 12368 ft: 13756 corp: 7/181b lim: 40 exec/s: 0 rss: 74Mb L: 22/38 MS: 1 ChangeBinInt- 00:08:06.136 [2024-12-09 15:41:01.103813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:2aff0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.136 [2024-12-09 15:41:01.103848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.136 [2024-12-09 15:41:01.103954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.136 [2024-12-09 15:41:01.103972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.136 #18 NEW cov: 12368 ft: 13837 corp: 8/204b lim: 40 exec/s: 0 rss: 74Mb L: 23/38 MS: 1 InsertByte- 00:08:06.136 [2024-12-09 15:41:01.153588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:30ffffff cdw11:2affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.136 [2024-12-09 15:41:01.153616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.136 #22 NEW cov: 12368 ft: 14564 corp: 9/212b lim: 40 exec/s: 0 rss: 74Mb L: 8/38 MS: 4 ChangeByte-CrossOver-ChangeByte-InsertByte- 00:08:06.136 [2024-12-09 15:41:01.214989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fffffffe cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.136 [2024-12-09 15:41:01.215017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.136 [2024-12-09 15:41:01.215112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.136 [2024-12-09 15:41:01.215129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.136 [2024-12-09 15:41:01.215221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.136 [2024-12-09 15:41:01.215237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.136 [2024-12-09 15:41:01.215325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.136 [2024-12-09 15:41:01.215342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.136 #23 NEW cov: 12368 ft: 14603 corp: 10/250b lim: 40 exec/s: 0 rss: 74Mb L: 38/38 MS: 1 CopyPart- 00:08:06.136 [2024-12-09 15:41:01.284264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:30ffff2a cdw11:2affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.136 [2024-12-09 15:41:01.284292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.136 NEW_FUNC[1/1]: 0x1c54978 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:06.136 #24 NEW cov: 12391 ft: 14658 corp: 11/258b lim: 40 exec/s: 0 rss: 74Mb L: 8/38 MS: 1 ChangeByte- 00:08:06.136 [2024-12-09 15:41:01.355514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fffffffe cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.136 [2024-12-09 15:41:01.355541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.136 [2024-12-09 15:41:01.355634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:fffffff7 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.137 [2024-12-09 15:41:01.355651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.137 [2024-12-09 15:41:01.355741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.137 [2024-12-09 15:41:01.355756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.137 [2024-12-09 15:41:01.355843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.137 [2024-12-09 15:41:01.355863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.396 #25 NEW cov: 12391 ft: 14707 corp: 12/296b lim: 40 exec/s: 0 rss: 74Mb L: 38/38 MS: 1 ChangeBit- 00:08:06.396 [2024-12-09 15:41:01.426135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fffffffe cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.396 [2024-12-09 15:41:01.426162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.396 [2024-12-09 15:41:01.426258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.396 [2024-12-09 15:41:01.426280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.396 [2024-12-09 15:41:01.426360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ff26ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.396 [2024-12-09 15:41:01.426376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.396 [2024-12-09 15:41:01.426460] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.396 [2024-12-09 15:41:01.426476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.396 #26 NEW cov: 12391 ft: 14764 corp: 13/334b lim: 40 exec/s: 26 rss: 74Mb L: 38/38 MS: 1 ChangeBinInt- 00:08:06.396 [2024-12-09 15:41:01.476063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fffffffe cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.396 [2024-12-09 15:41:01.476091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.396 [2024-12-09 15:41:01.476182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:fffffff7 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.396 [2024-12-09 15:41:01.476200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.396 [2024-12-09 15:41:01.476287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff3a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.396 [2024-12-09 15:41:01.476304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.396 [2024-12-09 15:41:01.476390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.396 [2024-12-09 15:41:01.476406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.396 #27 NEW cov: 12391 ft: 14795 corp: 14/372b lim: 40 exec/s: 27 rss: 74Mb L: 38/38 MS: 1 ChangeByte- 00:08:06.396 [2024-12-09 15:41:01.546313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fffffffe cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.396 [2024-12-09 15:41:01.546338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.396 [2024-12-09 15:41:01.546429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.396 [2024-12-09 15:41:01.546446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.396 [2024-12-09 15:41:01.546536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ff26ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.396 [2024-12-09 15:41:01.546551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.396 [2024-12-09 15:41:01.546635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:fffffff7 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.396 [2024-12-09 15:41:01.546653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.396 #28 NEW cov: 12391 ft: 
14810 corp: 15/410b lim: 40 exec/s: 28 rss: 74Mb L: 38/38 MS: 1 ChangeBit- 00:08:06.396 [2024-12-09 15:41:01.616001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.396 [2024-12-09 15:41:01.616030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.396 [2024-12-09 15:41:01.616122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.396 [2024-12-09 15:41:01.616139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.656 #29 NEW cov: 12391 ft: 14957 corp: 16/432b lim: 40 exec/s: 29 rss: 74Mb L: 22/38 MS: 1 CopyPart- 00:08:06.656 [2024-12-09 15:41:01.686891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fffffffe cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.656 [2024-12-09 15:41:01.686918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.656 [2024-12-09 15:41:01.687009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:fffffff7 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.656 [2024-12-09 15:41:01.687027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.656 [2024-12-09 15:41:01.687122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff3a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.656 [2024-12-09 15:41:01.687138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.656 [2024-12-09 15:41:01.687226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.656 [2024-12-09 15:41:01.687242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.656 #30 NEW cov: 12391 ft: 14976 corp: 17/470b lim: 40 exec/s: 30 rss: 74Mb L: 38/38 MS: 1 ShuffleBytes- 00:08:06.656 [2024-12-09 15:41:01.756474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:2aff0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.656 [2024-12-09 15:41:01.756501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.656 [2024-12-09 15:41:01.756594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00100000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.656 [2024-12-09 15:41:01.756611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.656 #31 NEW cov: 12391 ft: 15005 corp: 18/493b lim: 40 exec/s: 31 rss: 74Mb L: 23/38 MS: 1 ChangeBit- 00:08:06.656 [2024-12-09 15:41:01.827281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fffffffe 
cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.656 [2024-12-09 15:41:01.827306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.656 [2024-12-09 15:41:01.827399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffff7f cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.656 [2024-12-09 15:41:01.827415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.656 [2024-12-09 15:41:01.827506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ff26ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.656 [2024-12-09 15:41:01.827523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.656 [2024-12-09 15:41:01.827612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:fffffff7 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.656 [2024-12-09 15:41:01.827632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.656 #32 NEW cov: 12391 ft: 15019 corp: 19/531b lim: 40 exec/s: 32 rss: 74Mb L: 38/38 MS: 1 ChangeBit- 00:08:06.916 [2024-12-09 15:41:01.896552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:30ff27ff cdw11:2affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.916 [2024-12-09 15:41:01.896577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.916 #33 NEW cov: 12391 ft: 15046 corp: 20/539b lim: 40 exec/s: 33 rss: 74Mb L: 8/38 MS: 1 ChangeByte- 00:08:06.916 [2024-12-09 15:41:01.947088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:2abf0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.916 [2024-12-09 15:41:01.947114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.916 [2024-12-09 15:41:01.947193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.916 [2024-12-09 15:41:01.947209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.916 #34 NEW cov: 12391 ft: 15061 corp: 21/562b lim: 40 exec/s: 34 rss: 74Mb L: 23/38 MS: 1 ChangeBit- 00:08:06.916 [2024-12-09 15:41:01.997970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fffffffe cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.916 [2024-12-09 15:41:01.997995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.916 [2024-12-09 15:41:01.998093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.916 [2024-12-09 15:41:01.998111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.916 [2024-12-09 
15:41:01.998198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ff2604ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.916 [2024-12-09 15:41:01.998214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.916 [2024-12-09 15:41:01.998312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:fffffff7 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.916 [2024-12-09 15:41:01.998327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.916 #35 NEW cov: 12391 ft: 15069 corp: 22/600b lim: 40 exec/s: 35 rss: 74Mb L: 38/38 MS: 1 ChangeBinInt- 00:08:06.916 [2024-12-09 15:41:02.047935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:2a2dff00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.916 [2024-12-09 15:41:02.047960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.916 [2024-12-09 15:41:02.048048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00001000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.916 [2024-12-09 15:41:02.048064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.916 [2024-12-09 15:41:02.048156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.916 [2024-12-09 15:41:02.048171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.916 #36 NEW cov: 12391 ft: 15261 corp: 23/624b lim: 40 exec/s: 36 rss: 75Mb L: 24/38 MS: 1 InsertByte- 00:08:06.916 [2024-12-09 15:41:02.118606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fffffffe cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.916 [2024-12-09 15:41:02.118632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.916 [2024-12-09 15:41:02.118729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.916 [2024-12-09 15:41:02.118745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.916 [2024-12-09 15:41:02.118834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.916 [2024-12-09 15:41:02.118853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.916 [2024-12-09 15:41:02.118943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.916 [2024-12-09 15:41:02.118960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:08:06.916 #37 NEW cov: 12391 ft: 15279 corp: 24/662b lim: 40 exec/s: 37 rss: 75Mb L: 38/38 MS: 1 ChangeBinInt- 00:08:07.176 [2024-12-09 15:41:02.168546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:252aff00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.176 [2024-12-09 15:41:02.168572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.176 [2024-12-09 15:41:02.168666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00001000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.176 [2024-12-09 15:41:02.168682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.176 [2024-12-09 15:41:02.168770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.176 [2024-12-09 15:41:02.168787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.176 #40 NEW cov: 12391 ft: 15282 corp: 25/687b lim: 40 exec/s: 40 rss: 75Mb L: 25/38 MS: 3 ChangeByte-InsertByte-CrossOver- 00:08:07.176 [2024-12-09 15:41:02.218518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.176 [2024-12-09 15:41:02.218543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.176 [2024-12-09 15:41:02.218638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00090000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.176 [2024-12-09 15:41:02.218655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.176 #41 NEW cov: 12391 ft: 15312 corp: 26/709b lim: 40 exec/s: 41 rss: 75Mb L: 22/38 MS: 1 CrossOver- 00:08:07.176 [2024-12-09 15:41:02.289569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fffffffe cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.176 [2024-12-09 15:41:02.289595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.176 [2024-12-09 15:41:02.289681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:fffffff7 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.176 [2024-12-09 15:41:02.289701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.176 [2024-12-09 15:41:02.289786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:fbffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.176 [2024-12-09 15:41:02.289801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.176 [2024-12-09 15:41:02.289897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.176 
[2024-12-09 15:41:02.289914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:07.176 #42 NEW cov: 12391 ft: 15322 corp: 27/747b lim: 40 exec/s: 42 rss: 75Mb L: 38/38 MS: 1 ChangeBit- 00:08:07.176 [2024-12-09 15:41:02.339935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fffffffe cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.176 [2024-12-09 15:41:02.339963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.176 [2024-12-09 15:41:02.340049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.176 [2024-12-09 15:41:02.340066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.176 [2024-12-09 15:41:02.340148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.176 [2024-12-09 15:41:02.340163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.176 [2024-12-09 15:41:02.340248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.176 [2024-12-09 15:41:02.340264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:07.176 #43 NEW cov: 12391 ft: 15334 corp: 28/785b lim: 40 exec/s: 43 rss: 75Mb L: 38/38 MS: 1 CrossOver- 00:08:07.176 [2024-12-09 15:41:02.389249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:30ffe000 cdw11:2affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.176 [2024-12-09 15:41:02.389278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.436 #44 NEW cov: 12391 ft: 15335 corp: 29/793b lim: 40 exec/s: 22 rss: 75Mb L: 8/38 MS: 1 ChangeBinInt- 00:08:07.436 #44 DONE cov: 12391 ft: 15335 corp: 29/793b lim: 40 exec/s: 22 rss: 75Mb 00:08:07.436 Done 44 runs in 2 second(s) 00:08:07.436 15:41:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:08:07.436 15:41:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:07.436 15:41:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:07.436 15:41:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:08:07.436 15:41:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:08:07.436 15:41:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:07.436 15:41:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:07.436 15:41:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:07.436 15:41:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:08:07.436 15:41:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:07.436 15:41:02 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:07.436 15:41:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:08:07.436 15:41:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:08:07.436 15:41:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:07.436 15:41:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:08:07.436 15:41:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:07.436 15:41:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:07.436 15:41:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:07.436 15:41:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:08:07.436 [2024-12-09 15:41:02.588752] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:08:07.436 [2024-12-09 15:41:02.588820] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid909674 ] 00:08:07.695 [2024-12-09 15:41:02.854368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.695 [2024-12-09 15:41:02.903041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.955 [2024-12-09 15:41:02.962440] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.955 [2024-12-09 15:41:02.978584] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:08:07.955 INFO: Running with entropic power schedule (0xFF, 100). 00:08:07.955 INFO: Seed: 3525876516 00:08:07.955 INFO: Loaded 1 modules (390541 inline 8-bit counters): 390541 [0x2c82a4c, 0x2ce1fd9), 00:08:07.955 INFO: Loaded 1 PC tables (390541 PCs): 390541 [0x2ce1fe0,0x32d78b0), 00:08:07.955 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:07.955 INFO: A corpus is not provided, starting from an empty corpus 00:08:07.955 #2 INITED exec/s: 0 rss: 66Mb 00:08:07.955 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:07.955 This may also happen if the target rejected all inputs we tried so far 00:08:07.955 [2024-12-09 15:41:03.026922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.955 [2024-12-09 15:41:03.026959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.955 [2024-12-09 15:41:03.026994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.955 [2024-12-09 15:41:03.027011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.955 [2024-12-09 15:41:03.027042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.955 [2024-12-09 15:41:03.027059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.215 NEW_FUNC[1/716]: 0x44e158 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:08:08.215 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:08.215 #4 NEW cov: 12152 ft: 12149 corp: 2/31b lim: 40 exec/s: 0 rss: 73Mb L: 30/30 MS: 2 CrossOver-InsertRepeatedBytes- 00:08:08.215 [2024-12-09 15:41:03.377859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.215 [2024-12-09 15:41:03.377902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.215 [2024-12-09 15:41:03.377952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.215 [2024-12-09 15:41:03.377969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.215 [2024-12-09 15:41:03.378000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.215 [2024-12-09 15:41:03.378017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.215 [2024-12-09 15:41:03.378047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.215 [2024-12-09 15:41:03.378063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.475 #5 NEW cov: 12265 ft: 13080 corp: 3/64b lim: 40 exec/s: 0 rss: 74Mb L: 33/33 MS: 1 CopyPart- 00:08:08.475 [2024-12-09 15:41:03.478039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.475 [2024-12-09 15:41:03.478072] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.475 [2024-12-09 15:41:03.478122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.475 [2024-12-09 15:41:03.478138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.475 [2024-12-09 15:41:03.478169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.475 [2024-12-09 15:41:03.478186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.475 [2024-12-09 15:41:03.478217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.475 [2024-12-09 15:41:03.478232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.475 #11 NEW cov: 12271 ft: 13252 corp: 4/97b lim: 40 exec/s: 0 rss: 74Mb L: 33/33 MS: 1 ShuffleBytes- 00:08:08.475 [2024-12-09 15:41:03.568263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.475 [2024-12-09 15:41:03.568294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.475 [2024-12-09 15:41:03.568329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.475 [2024-12-09 15:41:03.568345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.475 [2024-12-09 15:41:03.568376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.475 [2024-12-09 15:41:03.568393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.475 [2024-12-09 15:41:03.568427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:20202020 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.475 [2024-12-09 15:41:03.568443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.475 [2024-12-09 15:41:03.568474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:202020ff cdw11:ffff0a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.475 [2024-12-09 15:41:03.568489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.475 #12 NEW cov: 12356 ft: 13632 corp: 5/137b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:08:08.475 [2024-12-09 15:41:03.628323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.475 [2024-12-09 15:41:03.628355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.475 [2024-12-09 15:41:03.628390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fff9ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.475 [2024-12-09 15:41:03.628406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.475 [2024-12-09 15:41:03.628437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.475 [2024-12-09 15:41:03.628453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.475 #18 NEW cov: 12356 ft: 13752 corp: 6/167b lim: 40 exec/s: 0 rss: 74Mb L: 30/40 MS: 1 ChangeBinInt- 00:08:08.475 [2024-12-09 15:41:03.688524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.475 [2024-12-09 15:41:03.688554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.475 [2024-12-09 15:41:03.688604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:fffffffe cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.475 [2024-12-09 15:41:03.688621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.475 [2024-12-09 15:41:03.688652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.475 [2024-12-09 15:41:03.688668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.475 [2024-12-09 15:41:03.688699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.475 [2024-12-09 15:41:03.688715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.735 #19 NEW cov: 12356 ft: 13885 corp: 7/200b lim: 40 exec/s: 0 rss: 74Mb L: 33/40 MS: 1 ChangeBinInt- 00:08:08.735 [2024-12-09 15:41:03.788785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.735 [2024-12-09 15:41:03.788818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.735 [2024-12-09 15:41:03.788861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.735 [2024-12-09 15:41:03.788879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.735 [2024-12-09 15:41:03.788914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE 
RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.735 [2024-12-09 15:41:03.788931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.735 [2024-12-09 15:41:03.788961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.735 [2024-12-09 15:41:03.788977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.735 #20 NEW cov: 12356 ft: 13929 corp: 8/233b lim: 40 exec/s: 0 rss: 74Mb L: 33/40 MS: 1 ShuffleBytes- 00:08:08.735 [2024-12-09 15:41:03.838878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.735 [2024-12-09 15:41:03.838908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.736 [2024-12-09 15:41:03.838957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.736 [2024-12-09 15:41:03.838974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.736 [2024-12-09 15:41:03.839006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff20 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.736 [2024-12-09 15:41:03.839022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.736 [2024-12-09 15:41:03.839053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:20202020 cdw11:2020ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.736 [2024-12-09 15:41:03.839069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.736 NEW_FUNC[1/1]: 0x1c54978 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:08.736 #21 NEW cov: 12373 ft: 14055 corp: 9/268b lim: 40 exec/s: 0 rss: 74Mb L: 35/40 MS: 1 EraseBytes- 00:08:08.736 [2024-12-09 15:41:03.929118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.736 [2024-12-09 15:41:03.929149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.736 [2024-12-09 15:41:03.929198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffe9ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.736 [2024-12-09 15:41:03.929215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.736 [2024-12-09 15:41:03.929246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.736 [2024-12-09 15:41:03.929262] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.995 #22 NEW cov: 12373 ft: 14114 corp: 10/298b lim: 40 exec/s: 0 rss: 74Mb L: 30/40 MS: 1 ChangeBit- 00:08:08.995 [2024-12-09 15:41:04.019364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.995 [2024-12-09 15:41:04.019400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.995 [2024-12-09 15:41:04.019449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffff1616 cdw11:16161616 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.995 [2024-12-09 15:41:04.019470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.995 [2024-12-09 15:41:04.019501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:fffffff9 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.995 [2024-12-09 15:41:04.019517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.995 [2024-12-09 15:41:04.019548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.995 [2024-12-09 15:41:04.019563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.995 #23 NEW cov: 12373 ft: 14190 corp: 11/334b lim: 40 exec/s: 23 rss: 74Mb L: 36/40 MS: 1 InsertRepeatedBytes- 00:08:08.995 [2024-12-09 15:41:04.079491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.995 [2024-12-09 15:41:04.079521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.995 [2024-12-09 15:41:04.079570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.995 [2024-12-09 15:41:04.079587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.995 [2024-12-09 15:41:04.079618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.995 [2024-12-09 15:41:04.079634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.995 [2024-12-09 15:41:04.079664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ff202020 cdw11:20202020 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.995 [2024-12-09 15:41:04.079680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.995 #24 NEW cov: 12373 ft: 14223 corp: 12/371b lim: 40 exec/s: 24 rss: 74Mb L: 37/40 MS: 1 CrossOver- 00:08:08.995 [2024-12-09 15:41:04.169750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE 
(1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.995 [2024-12-09 15:41:04.169781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.995 [2024-12-09 15:41:04.169830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffe9ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.995 [2024-12-09 15:41:04.169853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.995 [2024-12-09 15:41:04.169884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.995 [2024-12-09 15:41:04.169901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.995 [2024-12-09 15:41:04.169932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.995 [2024-12-09 15:41:04.169948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:09.254 #25 NEW cov: 12373 ft: 14230 corp: 13/404b lim: 40 exec/s: 25 rss: 74Mb L: 33/40 MS: 1 CrossOver- 00:08:09.254 [2024-12-09 15:41:04.260013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.254 [2024-12-09 15:41:04.260043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.254 [2024-12-09 15:41:04.260079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fffffffe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.254 [2024-12-09 15:41:04.260095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.254 [2024-12-09 15:41:04.260126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.254 [2024-12-09 15:41:04.260142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.254 [2024-12-09 15:41:04.260188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.254 [2024-12-09 15:41:04.260204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:09.254 #26 NEW cov: 12373 ft: 14258 corp: 14/437b lim: 40 exec/s: 26 rss: 74Mb L: 33/40 MS: 1 CopyPart- 00:08:09.254 [2024-12-09 15:41:04.350185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.254 [2024-12-09 15:41:04.350216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.254 [2024-12-09 15:41:04.350252] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.254 [2024-12-09 15:41:04.350269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.254 [2024-12-09 15:41:04.350300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.254 [2024-12-09 15:41:04.350316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.254 #27 NEW cov: 12373 ft: 14290 corp: 15/464b lim: 40 exec/s: 27 rss: 74Mb L: 27/40 MS: 1 EraseBytes- 00:08:09.254 [2024-12-09 15:41:04.410412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:fff7ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.254 [2024-12-09 15:41:04.410443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.254 [2024-12-09 15:41:04.410478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffff1616 cdw11:16161616 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.255 [2024-12-09 15:41:04.410494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.255 [2024-12-09 15:41:04.410525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:fffffff9 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.255 [2024-12-09 15:41:04.410541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.255 [2024-12-09 15:41:04.410571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.255 [2024-12-09 15:41:04.410587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:09.514 #28 NEW cov: 12373 ft: 14312 corp: 16/500b lim: 40 exec/s: 28 rss: 74Mb L: 36/40 MS: 1 ChangeBit- 00:08:09.514 [2024-12-09 15:41:04.500641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff7fffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.514 [2024-12-09 15:41:04.500687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.514 [2024-12-09 15:41:04.500723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fffffffe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.514 [2024-12-09 15:41:04.500739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.514 [2024-12-09 15:41:04.500770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.514 [2024-12-09 15:41:04.500787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:08:09.514 [2024-12-09 15:41:04.500818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.514 [2024-12-09 15:41:04.500834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:09.514 #29 NEW cov: 12373 ft: 14361 corp: 17/533b lim: 40 exec/s: 29 rss: 75Mb L: 33/40 MS: 1 ChangeBit- 00:08:09.514 [2024-12-09 15:41:04.590713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a383838 cdw11:38383838 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.514 [2024-12-09 15:41:04.590743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.514 [2024-12-09 15:41:04.590792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:38383838 cdw11:38383838 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.514 [2024-12-09 15:41:04.590808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.514 #31 NEW cov: 12373 ft: 14646 corp: 18/556b lim: 40 exec/s: 31 rss: 75Mb L: 23/40 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:09.514 [2024-12-09 15:41:04.651021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.514 [2024-12-09 15:41:04.651051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.514 [2024-12-09 15:41:04.651086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffff1616 cdw11:16161616 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.514 [2024-12-09 15:41:04.651103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.514 [2024-12-09 15:41:04.651134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:fffffff9 cdw11:fffff9ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.514 [2024-12-09 15:41:04.651150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.514 [2024-12-09 15:41:04.651180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.514 [2024-12-09 15:41:04.651196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:09.515 #32 NEW cov: 12373 ft: 14656 corp: 19/592b lim: 40 exec/s: 32 rss: 75Mb L: 36/40 MS: 1 CrossOver- 00:08:09.515 [2024-12-09 15:41:04.701074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:fffaffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.515 [2024-12-09 15:41:04.701110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.515 [2024-12-09 15:41:04.701145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fffff9ff SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.515 [2024-12-09 15:41:04.701162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.515 [2024-12-09 15:41:04.701193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.515 [2024-12-09 15:41:04.701209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.515 #33 NEW cov: 12373 ft: 14677 corp: 20/623b lim: 40 exec/s: 33 rss: 75Mb L: 31/40 MS: 1 InsertByte- 00:08:09.774 [2024-12-09 15:41:04.751204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:fffaffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.774 [2024-12-09 15:41:04.751235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.774 [2024-12-09 15:41:04.751285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fffff9ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.774 [2024-12-09 15:41:04.751301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.775 [2024-12-09 15:41:04.751332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.775 [2024-12-09 15:41:04.751349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.775 #34 NEW cov: 12373 ft: 14730 corp: 21/654b lim: 40 exec/s: 34 rss: 75Mb L: 31/40 MS: 1 ChangeByte- 00:08:09.775 [2024-12-09 15:41:04.841370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.775 [2024-12-09 15:41:04.841401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.775 [2024-12-09 15:41:04.841450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.775 [2024-12-09 15:41:04.841466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.775 #35 NEW cov: 12373 ft: 14739 corp: 22/676b lim: 40 exec/s: 35 rss: 75Mb L: 22/40 MS: 1 EraseBytes- 00:08:09.775 [2024-12-09 15:41:04.901650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:fffaffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.775 [2024-12-09 15:41:04.901681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.775 [2024-12-09 15:41:04.901715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fffff9ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.775 [2024-12-09 15:41:04.901731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:08:09.775 [2024-12-09 15:41:04.901762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.775 [2024-12-09 15:41:04.901778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.775 #36 NEW cov: 12380 ft: 14792 corp: 23/707b lim: 40 exec/s: 36 rss: 75Mb L: 31/40 MS: 1 ChangeBit- 00:08:09.775 [2024-12-09 15:41:04.991905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.775 [2024-12-09 15:41:04.991942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.775 [2024-12-09 15:41:04.991994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff27 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.775 [2024-12-09 15:41:04.992010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.775 [2024-12-09 15:41:04.992043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.775 [2024-12-09 15:41:04.992059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.775 [2024-12-09 15:41:04.992091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ff202020 cdw11:20202020 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.775 [2024-12-09 15:41:04.992107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:10.036 #37 NEW cov: 12380 ft: 14802 corp: 24/744b lim: 40 exec/s: 18 rss: 75Mb L: 37/40 MS: 1 ChangeByte- 00:08:10.036 #37 DONE cov: 12380 ft: 14802 corp: 24/744b lim: 40 exec/s: 18 rss: 75Mb 00:08:10.036 Done 37 runs in 2 second(s) 00:08:10.036 15:41:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:08:10.036 15:41:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:10.036 15:41:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:10.036 15:41:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:08:10.036 15:41:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:08:10.036 15:41:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:10.036 15:41:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:10.036 15:41:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:08:10.036 15:41:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:08:10.036 15:41:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:10.036 15:41:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:10.036 15:41:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:08:10.036 15:41:05 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:08:10.036 15:41:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:08:10.036 15:41:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:08:10.036 15:41:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:10.036 15:41:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:10.036 15:41:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:10.036 15:41:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:08:10.036 [2024-12-09 15:41:05.232287] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:08:10.036 [2024-12-09 15:41:05.232360] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910017 ] 00:08:10.295 [2024-12-09 15:41:05.498622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.554 [2024-12-09 15:41:05.553836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.554 [2024-12-09 15:41:05.613126] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.554 [2024-12-09 15:41:05.629272] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:08:10.554 INFO: Running with entropic power schedule (0xFF, 100). 00:08:10.554 INFO: Seed: 1883908855 00:08:10.554 INFO: Loaded 1 modules (390541 inline 8-bit counters): 390541 [0x2c82a4c, 0x2ce1fd9), 00:08:10.554 INFO: Loaded 1 PC tables (390541 PCs): 390541 [0x2ce1fe0,0x32d78b0), 00:08:10.554 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:08:10.554 INFO: A corpus is not provided, starting from an empty corpus 00:08:10.554 #2 INITED exec/s: 0 rss: 66Mb 00:08:10.554 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:10.554 This may also happen if the target rejected all inputs we tried so far 00:08:10.813 NEW_FUNC[1/704]: 0x44fd28 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:08:10.813 NEW_FUNC[2/704]: 0x471278 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:08:10.813 #10 NEW cov: 12040 ft: 12036 corp: 2/8b lim: 35 exec/s: 0 rss: 74Mb L: 7/7 MS: 3 CrossOver-InsertByte-CMP- DE: "\001\000\000\001"- 00:08:11.072 #11 NEW cov: 12153 ft: 12394 corp: 3/15b lim: 35 exec/s: 0 rss: 74Mb L: 7/7 MS: 1 ChangeByte- 00:08:11.072 [2024-12-09 15:41:06.076138] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.072 [2024-12-09 15:41:06.076182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.072 [2024-12-09 15:41:06.076259] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.072 [2024-12-09 15:41:06.076276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.072 [2024-12-09 15:41:06.076333] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.072 [2024-12-09 15:41:06.076349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.072 [2024-12-09 15:41:06.076403] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.072 [2024-12-09 15:41:06.076419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.072 NEW_FUNC[1/15]: 0x194d518 in spdk_nvme_print_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:263 00:08:11.072 NEW_FUNC[2/15]: 0x194d758 in nvme_admin_qpair_print_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:202 00:08:11.072 #14 NEW cov: 12298 ft: 13418 corp: 4/49b lim: 35 exec/s: 0 rss: 74Mb L: 34/34 MS: 3 ChangeByte-ChangeByte-InsertRepeatedBytes- 00:08:11.072 [2024-12-09 15:41:06.115750] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.072 [2024-12-09 15:41:06.115776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.072 #15 NEW cov: 12390 ft: 13948 corp: 5/56b lim: 35 exec/s: 0 rss: 74Mb L: 7/34 MS: 1 CMP- DE: "\000\037"- 00:08:11.072 NEW_FUNC[1/2]: 0x46a708 in feat_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:273 00:08:11.072 NEW_FUNC[2/2]: 0x13897e8 in nvmf_ctrlr_set_features_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1606 00:08:11.072 #16 NEW cov: 12447 ft: 14133 corp: 6/63b lim: 35 exec/s: 0 rss: 74Mb L: 7/34 MS: 1 ShuffleBytes- 00:08:11.072 [2024-12-09 15:41:06.196340] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 
cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.072 [2024-12-09 15:41:06.196367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.072 [2024-12-09 15:41:06.196440] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.072 [2024-12-09 15:41:06.196457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.072 #20 NEW cov: 12447 ft: 14379 corp: 7/84b lim: 35 exec/s: 0 rss: 74Mb L: 21/34 MS: 4 EraseBytes-ChangeBit-PersAutoDict-InsertRepeatedBytes- DE: "\001\000\000\001"- 00:08:11.072 [2024-12-09 15:41:06.236700] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.072 [2024-12-09 15:41:06.236725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.072 [2024-12-09 15:41:06.236797] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.072 [2024-12-09 15:41:06.236813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.072 [2024-12-09 15:41:06.236921] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.072 [2024-12-09 15:41:06.236937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.072 [2024-12-09 15:41:06.236992] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.072 [2024-12-09 15:41:06.237007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:11.072 #21 NEW cov: 12447 ft: 14613 corp: 8/119b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:08:11.331 #22 NEW cov: 12447 ft: 14644 corp: 9/126b lim: 35 exec/s: 0 rss: 74Mb L: 7/35 MS: 1 CopyPart- 00:08:11.331 #23 NEW cov: 12447 ft: 14701 corp: 10/133b lim: 35 exec/s: 0 rss: 74Mb L: 7/35 MS: 1 CrossOver- 00:08:11.331 [2024-12-09 15:41:06.416829] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000d3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.331 [2024-12-09 15:41:06.416865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.331 #24 NEW cov: 12447 ft: 14797 corp: 11/150b lim: 35 exec/s: 0 rss: 74Mb L: 17/35 MS: 1 InsertRepeatedBytes- 00:08:11.331 [2024-12-09 15:41:06.476984] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000004b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.331 [2024-12-09 15:41:06.477010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.331 #25 NEW cov: 12447 ft: 14810 corp: 12/170b lim: 35 exec/s: 0 rss: 74Mb L: 20/35 MS: 1 InsertRepeatedBytes- 00:08:11.331 [2024-12-09 15:41:06.517279] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED 
cid:6 cdw10:0000004b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.331 [2024-12-09 15:41:06.517304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.590 NEW_FUNC[1/1]: 0x1c54978 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:11.590 #31 NEW cov: 12470 ft: 14860 corp: 13/191b lim: 35 exec/s: 0 rss: 74Mb L: 21/35 MS: 1 InsertByte- 00:08:11.590 #32 NEW cov: 12470 ft: 14885 corp: 14/199b lim: 35 exec/s: 0 rss: 74Mb L: 8/35 MS: 1 InsertByte- 00:08:11.590 [2024-12-09 15:41:06.617779] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.590 [2024-12-09 15:41:06.617806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.590 [2024-12-09 15:41:06.617866] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.590 [2024-12-09 15:41:06.617884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.590 [2024-12-09 15:41:06.617982] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.590 [2024-12-09 15:41:06.617998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.590 [2024-12-09 15:41:06.618054] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.590 [2024-12-09 15:41:06.618068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:11.590 #33 NEW cov: 12470 ft: 14908 corp: 15/234b lim: 35 exec/s: 33 rss: 74Mb L: 35/35 MS: 1 CrossOver- 00:08:11.590 [2024-12-09 15:41:06.677691] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000004c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.590 [2024-12-09 15:41:06.677717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.590 [2024-12-09 15:41:06.677792] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000004c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.590 [2024-12-09 15:41:06.677806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.590 #38 NEW cov: 12470 ft: 14944 corp: 16/257b lim: 35 exec/s: 38 rss: 74Mb L: 23/35 MS: 5 PersAutoDict-EraseBytes-InsertByte-InsertByte-InsertRepeatedBytes- DE: "\001\000\000\001"- 00:08:11.590 [2024-12-09 15:41:06.717785] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000004c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.590 [2024-12-09 15:41:06.717809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.590 [2024-12-09 15:41:06.717858] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000004c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.590 [2024-12-09 15:41:06.717870] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.590 #39 NEW cov: 12479 ft: 14982 corp: 17/280b lim: 35 exec/s: 39 rss: 74Mb L: 23/35 MS: 1 ChangeBit- 00:08:11.590 [2024-12-09 15:41:06.777980] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.590 [2024-12-09 15:41:06.778008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.590 [2024-12-09 15:41:06.778067] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.590 [2024-12-09 15:41:06.778083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.590 #41 NEW cov: 12479 ft: 15012 corp: 18/302b lim: 35 exec/s: 41 rss: 75Mb L: 22/35 MS: 2 ShuffleBytes-CrossOver- 00:08:11.848 #42 NEW cov: 12479 ft: 15107 corp: 19/310b lim: 35 exec/s: 42 rss: 75Mb L: 8/35 MS: 1 InsertByte- 00:08:11.848 #43 NEW cov: 12479 ft: 15133 corp: 20/319b lim: 35 exec/s: 43 rss: 75Mb L: 9/35 MS: 1 InsertByte- 00:08:11.848 [2024-12-09 15:41:06.918546] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.848 [2024-12-09 15:41:06.918578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.848 [2024-12-09 15:41:06.918637] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.848 [2024-12-09 15:41:06.918653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.848 [2024-12-09 15:41:06.918710] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.848 [2024-12-09 15:41:06.918726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.848 [2024-12-09 15:41:06.918783] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.848 [2024-12-09 15:41:06.918799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.848 #44 NEW cov: 12479 ft: 15155 corp: 21/348b lim: 35 exec/s: 44 rss: 75Mb L: 29/35 MS: 1 EraseBytes- 00:08:11.848 #45 NEW cov: 12479 ft: 15182 corp: 22/356b lim: 35 exec/s: 45 rss: 75Mb L: 8/35 MS: 1 ChangeBit- 00:08:11.848 [2024-12-09 15:41:07.018455] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000004b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.848 [2024-12-09 15:41:07.018480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.848 #46 NEW cov: 12479 ft: 15196 corp: 23/376b lim: 35 exec/s: 46 rss: 75Mb L: 20/35 MS: 1 ChangeBit- 00:08:11.848 [2024-12-09 15:41:07.058995] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:08:11.848 [2024-12-09 15:41:07.059022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.848 [2024-12-09 15:41:07.059078] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.848 [2024-12-09 15:41:07.059094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.848 [2024-12-09 15:41:07.059153] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.848 [2024-12-09 15:41:07.059169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.848 [2024-12-09 15:41:07.059226] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.848 [2024-12-09 15:41:07.059241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.848 [2024-12-09 15:41:07.059301] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.848 [2024-12-09 15:41:07.059318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:12.107 #47 NEW cov: 12479 ft: 15210 corp: 24/411b lim: 35 exec/s: 47 rss: 75Mb L: 35/35 MS: 1 InsertByte- 00:08:12.107 [2024-12-09 15:41:07.099123] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.107 [2024-12-09 15:41:07.099149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.107 [2024-12-09 15:41:07.099208] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.107 [2024-12-09 15:41:07.099225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.107 [2024-12-09 15:41:07.099284] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.107 [2024-12-09 15:41:07.099300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.107 [2024-12-09 15:41:07.099357] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.107 [2024-12-09 15:41:07.099373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:12.107 [2024-12-09 15:41:07.099427] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.107 [2024-12-09 15:41:07.099442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:12.107 #48 NEW cov: 12479 ft: 15280 corp: 25/446b lim: 35 exec/s: 48 
rss: 75Mb L: 35/35 MS: 1 ChangeByte- 00:08:12.107 [2024-12-09 15:41:07.158914] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000004b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.107 [2024-12-09 15:41:07.158938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.107 #49 NEW cov: 12479 ft: 15295 corp: 26/466b lim: 35 exec/s: 49 rss: 75Mb L: 20/35 MS: 1 ChangeBit- 00:08:12.107 [2024-12-09 15:41:07.219371] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000a0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.107 [2024-12-09 15:41:07.219396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.107 [2024-12-09 15:41:07.219474] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.107 [2024-12-09 15:41:07.219489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.107 [2024-12-09 15:41:07.219548] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.107 [2024-12-09 15:41:07.219561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.107 [2024-12-09 15:41:07.219617] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.107 [2024-12-09 15:41:07.219630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:12.107 #53 NEW cov: 12479 ft: 15310 corp: 27/499b lim: 35 exec/s: 53 rss: 75Mb L: 33/35 MS: 4 InsertByte-ChangeByte-InsertByte-InsertRepeatedBytes- 00:08:12.107 #54 NEW cov: 12479 ft: 15318 corp: 28/508b lim: 35 exec/s: 54 rss: 75Mb L: 9/35 MS: 1 InsertByte- 00:08:12.107 [2024-12-09 15:41:07.319638] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000a0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.107 [2024-12-09 15:41:07.319664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.107 [2024-12-09 15:41:07.319725] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.107 [2024-12-09 15:41:07.319739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.107 [2024-12-09 15:41:07.319797] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.107 [2024-12-09 15:41:07.319811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.107 [2024-12-09 15:41:07.319875] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.107 [2024-12-09 15:41:07.319889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:12.366 #55 NEW cov: 12479 ft: 
15345 corp: 29/542b lim: 35 exec/s: 55 rss: 75Mb L: 34/35 MS: 1 InsertByte- 00:08:12.366 [2024-12-09 15:41:07.379906] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.366 [2024-12-09 15:41:07.379931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.366 [2024-12-09 15:41:07.379991] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.366 [2024-12-09 15:41:07.380007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.366 [2024-12-09 15:41:07.380103] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.366 [2024-12-09 15:41:07.380117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:12.366 [2024-12-09 15:41:07.380173] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.366 [2024-12-09 15:41:07.380188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:12.366 #56 NEW cov: 12479 ft: 15359 corp: 30/577b lim: 35 exec/s: 56 rss: 75Mb L: 35/35 MS: 1 ChangeBinInt- 00:08:12.366 #57 NEW cov: 12479 ft: 15397 corp: 31/584b lim: 35 exec/s: 57 rss: 75Mb L: 7/35 MS: 1 ChangeBit- 00:08:12.366 #58 NEW cov: 12479 ft: 15419 corp: 32/591b lim: 35 exec/s: 58 rss: 75Mb L: 7/35 MS: 1 ShuffleBytes- 00:08:12.366 #59 NEW cov: 12479 ft: 15428 corp: 33/598b lim: 35 exec/s: 59 rss: 75Mb L: 7/35 MS: 1 ChangeBit- 00:08:12.366 #62 NEW cov: 12479 ft: 15434 corp: 34/607b lim: 35 exec/s: 62 rss: 75Mb L: 9/35 MS: 3 EraseBytes-ChangeBinInt-CopyPart- 00:08:12.624 #63 NEW cov: 12479 ft: 15440 corp: 35/614b lim: 35 exec/s: 63 rss: 75Mb L: 7/35 MS: 1 EraseBytes- 00:08:12.624 [2024-12-09 15:41:07.640511] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.624 [2024-12-09 15:41:07.640540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.624 [2024-12-09 15:41:07.640599] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.624 [2024-12-09 15:41:07.640615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.624 [2024-12-09 15:41:07.640688] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.624 [2024-12-09 15:41:07.640705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:12.624 #64 pulse cov: 12479 ft: 15551 corp: 35/614b lim: 35 exec/s: 32 rss: 75Mb 00:08:12.624 #64 NEW cov: 12479 ft: 15551 corp: 36/642b lim: 35 exec/s: 32 rss: 75Mb L: 28/35 MS: 1 InsertRepeatedBytes- 00:08:12.624 #64 DONE cov: 12479 ft: 15551 corp: 36/642b lim: 35 exec/s: 32 rss: 75Mb 00:08:12.624 ###### 
Recommended dictionary. ###### 00:08:12.624 "\001\000\000\001" # Uses: 3 00:08:12.624 "\000\037" # Uses: 0 00:08:12.624 ###### End of recommended dictionary. ###### 00:08:12.624 Done 64 runs in 2 second(s) 00:08:12.624 15:41:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:08:12.624 15:41:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:12.624 15:41:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:12.624 15:41:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:08:12.624 15:41:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:08:12.624 15:41:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:12.624 15:41:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:12.624 15:41:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:12.624 15:41:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:08:12.624 15:41:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:12.624 15:41:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:12.624 15:41:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:08:12.624 15:41:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:08:12.624 15:41:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:12.624 15:41:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:08:12.624 15:41:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:12.624 15:41:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:12.624 15:41:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:12.624 15:41:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:08:12.624 [2024-12-09 15:41:07.847168] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:08:12.624 [2024-12-09 15:41:07.847242] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910359 ] 00:08:13.193 [2024-12-09 15:41:08.117099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.193 [2024-12-09 15:41:08.171682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.193 [2024-12-09 15:41:08.230697] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.193 [2024-12-09 15:41:08.246843] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:08:13.193 INFO: Running with entropic power schedule (0xFF, 100). 00:08:13.193 INFO: Seed: 204970474 00:08:13.194 INFO: Loaded 1 modules (390541 inline 8-bit counters): 390541 [0x2c82a4c, 0x2ce1fd9), 00:08:13.194 INFO: Loaded 1 PC tables (390541 PCs): 390541 [0x2ce1fe0,0x32d78b0), 00:08:13.194 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:13.194 INFO: A corpus is not provided, starting from an empty corpus 00:08:13.194 #2 INITED exec/s: 0 rss: 66Mb 00:08:13.194 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:13.194 This may also happen if the target rejected all inputs we tried so far 00:08:13.194 [2024-12-09 15:41:08.292549] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007e9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.194 [2024-12-09 15:41:08.292578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.454 NEW_FUNC[1/717]: 0x451268 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:08:13.454 NEW_FUNC[2/717]: 0x46a708 in feat_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:273 00:08:13.454 #10 NEW cov: 12172 ft: 12168 corp: 2/17b lim: 35 exec/s: 0 rss: 74Mb L: 16/16 MS: 3 ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:08:13.454 [2024-12-09 15:41:08.633324] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007e9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.454 [2024-12-09 15:41:08.633360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.454 #11 NEW cov: 12285 ft: 12720 corp: 3/33b lim: 35 exec/s: 0 rss: 74Mb L: 16/16 MS: 1 ChangeBit- 00:08:13.713 [2024-12-09 15:41:08.693539] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007e9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.713 [2024-12-09 15:41:08.693567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.713 [2024-12-09 15:41:08.693627] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.714 [2024-12-09 15:41:08.693640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.714 #12 NEW cov: 12291 ft: 13142 corp: 4/57b lim: 35 exec/s: 0 rss: 74Mb L: 24/24 MS: 1 InsertRepeatedBytes- 
00:08:13.714 [2024-12-09 15:41:08.753693] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000017 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.714 [2024-12-09 15:41:08.753719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.714 [2024-12-09 15:41:08.753776] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES LBA RANGE TYPE cid:6 cdw10:00000703 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.714 [2024-12-09 15:41:08.753789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.714 NEW_FUNC[1/1]: 0x46c4b8 in feat_lba_range_type /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:289 00:08:13.714 #13 NEW cov: 12387 ft: 13394 corp: 5/81b lim: 35 exec/s: 0 rss: 74Mb L: 24/24 MS: 1 ChangeBinInt- 00:08:13.714 [2024-12-09 15:41:08.813745] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007e9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.714 [2024-12-09 15:41:08.813772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.714 #14 NEW cov: 12387 ft: 13568 corp: 6/97b lim: 35 exec/s: 0 rss: 74Mb L: 16/24 MS: 1 ChangeBinInt- 00:08:13.714 [2024-12-09 15:41:08.853988] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000017 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.714 [2024-12-09 15:41:08.854013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.714 [2024-12-09 15:41:08.854084] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES LBA RANGE TYPE cid:6 cdw10:00000703 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.714 [2024-12-09 15:41:08.854099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.714 #15 NEW cov: 12387 ft: 13622 corp: 7/121b lim: 35 exec/s: 0 rss: 75Mb L: 24/24 MS: 1 CopyPart- 00:08:13.714 #16 NEW cov: 12387 ft: 13888 corp: 8/131b lim: 35 exec/s: 0 rss: 75Mb L: 10/24 MS: 1 EraseBytes- 00:08:13.973 [2024-12-09 15:41:08.953978] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000002ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.973 [2024-12-09 15:41:08.954005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.973 #20 NEW cov: 12387 ft: 13974 corp: 9/140b lim: 35 exec/s: 0 rss: 75Mb L: 9/24 MS: 4 ChangeByte-ShuffleBytes-ShuffleBytes-CMP- DE: "\377Q\224\034\303\311\260\244"- 00:08:13.973 [2024-12-09 15:41:08.994046] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000021 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.973 [2024-12-09 15:41:08.994072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.973 #22 NEW cov: 12387 ft: 13981 corp: 10/149b lim: 35 exec/s: 0 rss: 75Mb L: 9/24 MS: 2 ChangeByte-CMP- DE: "\001R\224\034\250\321\303\026"- 00:08:13.973 [2024-12-09 15:41:09.034451] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000e9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:13.973 [2024-12-09 15:41:09.034476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.973 [2024-12-09 15:41:09.034549] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.973 [2024-12-09 15:41:09.034563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.973 #23 NEW cov: 12387 ft: 14020 corp: 11/173b lim: 35 exec/s: 0 rss: 75Mb L: 24/24 MS: 1 CrossOver- 00:08:13.973 [2024-12-09 15:41:09.074751] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000017 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.973 [2024-12-09 15:41:09.074776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.973 [2024-12-09 15:41:09.074854] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES LBA RANGE TYPE cid:6 cdw10:00000703 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.973 [2024-12-09 15:41:09.074869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.973 [2024-12-09 15:41:09.074926] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000000e9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.973 [2024-12-09 15:41:09.074941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.973 #29 NEW cov: 12387 ft: 14527 corp: 12/205b lim: 35 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 PersAutoDict- DE: "\001R\224\034\250\321\303\026"- 00:08:13.973 [2024-12-09 15:41:09.114588] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007e9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.973 [2024-12-09 15:41:09.114613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.973 #30 NEW cov: 12387 ft: 14556 corp: 13/222b lim: 35 exec/s: 0 rss: 75Mb L: 17/32 MS: 1 CrossOver- 00:08:13.973 [2024-12-09 15:41:09.174611] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000021 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.973 [2024-12-09 15:41:09.174635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.232 NEW_FUNC[1/1]: 0x1c54978 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:14.232 #31 NEW cov: 12410 ft: 14638 corp: 14/231b lim: 35 exec/s: 0 rss: 75Mb L: 9/32 MS: 1 ChangeBit- 00:08:14.232 [2024-12-09 15:41:09.234743] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000002ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.232 [2024-12-09 15:41:09.234769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.232 #32 NEW cov: 12410 ft: 14670 corp: 15/243b lim: 35 exec/s: 32 rss: 75Mb L: 12/32 MS: 1 CrossOver- 00:08:14.232 [2024-12-09 15:41:09.295365] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000017 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.232 [2024-12-09 15:41:09.295390] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.232 [2024-12-09 15:41:09.295449] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.232 [2024-12-09 15:41:09.295462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.232 [2024-12-09 15:41:09.295524] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.232 [2024-12-09 15:41:09.295538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:14.232 #33 NEW cov: 12410 ft: 14694 corp: 16/276b lim: 35 exec/s: 33 rss: 75Mb L: 33/33 MS: 1 CrossOver- 00:08:14.232 [2024-12-09 15:41:09.355074] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000002ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.232 [2024-12-09 15:41:09.355098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.232 #34 NEW cov: 12410 ft: 14725 corp: 17/286b lim: 35 exec/s: 34 rss: 75Mb L: 10/33 MS: 1 InsertByte- 00:08:14.232 [2024-12-09 15:41:09.395484] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000017 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.232 [2024-12-09 15:41:09.395508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.232 [2024-12-09 15:41:09.395583] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.232 [2024-12-09 15:41:09.395598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.232 #35 NEW cov: 12410 ft: 14742 corp: 18/311b lim: 35 exec/s: 35 rss: 75Mb L: 25/33 MS: 1 EraseBytes- 00:08:14.232 [2024-12-09 15:41:09.455849] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007e9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.232 [2024-12-09 15:41:09.455875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.232 [2024-12-09 15:41:09.455934] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005e9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.232 [2024-12-09 15:41:09.455948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.232 [2024-12-09 15:41:09.456007] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005b5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.232 [2024-12-09 15:41:09.456021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:14.491 #36 NEW cov: 12410 ft: 14751 corp: 19/343b lim: 35 exec/s: 36 rss: 75Mb L: 32/33 MS: 1 InsertRepeatedBytes- 00:08:14.491 NEW_FUNC[1/1]: 0x1384748 in nvmf_ctrlr_get_features_host_behavior_support /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1940 00:08:14.491 #37 NEW cov: 12435 ft: 
14867 corp: 20/359b lim: 35 exec/s: 37 rss: 75Mb L: 16/33 MS: 1 ChangeBinInt- 00:08:14.491 [2024-12-09 15:41:09.565813] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000452 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.492 [2024-12-09 15:41:09.565839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.492 #38 NEW cov: 12435 ft: 14927 corp: 21/375b lim: 35 exec/s: 38 rss: 75Mb L: 16/33 MS: 1 PersAutoDict- DE: "\001R\224\034\250\321\303\026"- 00:08:14.492 #39 NEW cov: 12435 ft: 14936 corp: 22/391b lim: 35 exec/s: 39 rss: 75Mb L: 16/33 MS: 1 ChangeByte- 00:08:14.492 [2024-12-09 15:41:09.665903] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000021 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.492 [2024-12-09 15:41:09.665928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.492 #40 NEW cov: 12435 ft: 14987 corp: 23/404b lim: 35 exec/s: 40 rss: 75Mb L: 13/33 MS: 1 CopyPart- 00:08:14.751 #41 NEW cov: 12435 ft: 15056 corp: 24/420b lim: 35 exec/s: 41 rss: 75Mb L: 16/33 MS: 1 ChangeByte- 00:08:14.751 [2024-12-09 15:41:09.786542] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000152 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.751 [2024-12-09 15:41:09.786572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.751 [2024-12-09 15:41:09.786650] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.751 [2024-12-09 15:41:09.786665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.751 #42 NEW cov: 12435 ft: 15147 corp: 25/445b lim: 35 exec/s: 42 rss: 75Mb L: 25/33 MS: 1 CrossOver- 00:08:14.751 #43 NEW cov: 12435 ft: 15207 corp: 26/462b lim: 35 exec/s: 43 rss: 75Mb L: 17/33 MS: 1 PersAutoDict- DE: "\001R\224\034\250\321\303\026"- 00:08:14.751 [2024-12-09 15:41:09.886831] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.751 [2024-12-09 15:41:09.886863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.751 [2024-12-09 15:41:09.886940] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007e9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.751 [2024-12-09 15:41:09.886955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.751 #44 NEW cov: 12435 ft: 15215 corp: 27/486b lim: 35 exec/s: 44 rss: 76Mb L: 24/33 MS: 1 CopyPart- 00:08:14.751 [2024-12-09 15:41:09.946801] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000002ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.751 [2024-12-09 15:41:09.946827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.751 [2024-12-09 15:41:09.946893] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000c3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:14.751 [2024-12-09 15:41:09.946909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.010 #45 NEW cov: 12435 ft: 15227 corp: 28/506b lim: 35 exec/s: 45 rss: 76Mb L: 20/33 MS: 1 PersAutoDict- DE: "\001R\224\034\250\321\303\026"- 00:08:15.010 [2024-12-09 15:41:10.007211] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.010 [2024-12-09 15:41:10.007237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.010 [2024-12-09 15:41:10.007298] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.010 [2024-12-09 15:41:10.007313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.010 #46 NEW cov: 12435 ft: 15233 corp: 29/531b lim: 35 exec/s: 46 rss: 76Mb L: 25/33 MS: 1 PersAutoDict- DE: "\377Q\224\034\303\311\260\244"- 00:08:15.010 [2024-12-09 15:41:10.067339] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000e9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.010 [2024-12-09 15:41:10.067374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.010 [2024-12-09 15:41:10.067436] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000e9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.010 [2024-12-09 15:41:10.067450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.010 #47 NEW cov: 12435 ft: 15262 corp: 30/557b lim: 35 exec/s: 47 rss: 76Mb L: 26/33 MS: 1 CrossOver- 00:08:15.010 [2024-12-09 15:41:10.127397] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000451 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.010 [2024-12-09 15:41:10.127427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.010 #48 NEW cov: 12435 ft: 15276 corp: 31/573b lim: 35 exec/s: 48 rss: 76Mb L: 16/33 MS: 1 PersAutoDict- DE: "\377Q\224\034\303\311\260\244"- 00:08:15.010 [2024-12-09 15:41:10.167564] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000e9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.010 [2024-12-09 15:41:10.167591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.010 [2024-12-09 15:41:10.167669] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000e9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.010 [2024-12-09 15:41:10.167683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.010 #49 NEW cov: 12435 ft: 15282 corp: 32/599b lim: 35 exec/s: 49 rss: 76Mb L: 26/33 MS: 1 ShuffleBytes- 00:08:15.010 [2024-12-09 15:41:10.227515] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000002ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.010 [2024-12-09 15:41:10.227542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.270 #50 NEW cov: 12435 ft: 15291 corp: 33/609b lim: 35 exec/s: 50 rss: 77Mb L: 10/33 MS: 1 ChangeByte- 00:08:15.270 [2024-12-09 15:41:10.287979] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.270 [2024-12-09 15:41:10.288007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.270 [2024-12-09 15:41:10.288083] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.270 [2024-12-09 15:41:10.288097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.270 #51 NEW cov: 12435 ft: 15329 corp: 34/631b lim: 35 exec/s: 25 rss: 77Mb L: 22/33 MS: 1 EraseBytes- 00:08:15.270 #51 DONE cov: 12435 ft: 15329 corp: 34/631b lim: 35 exec/s: 25 rss: 77Mb 00:08:15.270 ###### Recommended dictionary. ###### 00:08:15.270 "\377Q\224\034\303\311\260\244" # Uses: 2 00:08:15.270 "\001R\224\034\250\321\303\026" # Uses: 4 00:08:15.270 ###### End of recommended dictionary. ###### 00:08:15.270 Done 51 runs in 2 second(s) 00:08:15.270 15:41:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:08:15.270 15:41:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:15.270 15:41:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:15.270 15:41:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:08:15.270 15:41:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:08:15.270 15:41:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:15.270 15:41:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:15.270 15:41:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:08:15.270 15:41:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:08:15.270 15:41:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:15.270 15:41:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:15.270 15:41:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:08:15.270 15:41:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:08:15.270 15:41:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:08:15.270 15:41:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:08:15.270 15:41:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:15.270 15:41:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:15.270 15:41:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:15.270 15:41:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:08:15.270 [2024-12-09 15:41:10.486129] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:08:15.270 [2024-12-09 15:41:10.486199] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910707 ] 00:08:15.529 [2024-12-09 15:41:10.754668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.788 [2024-12-09 15:41:10.809067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.788 [2024-12-09 15:41:10.868239] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.788 [2024-12-09 15:41:10.884380] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:08:15.788 INFO: Running with entropic power schedule (0xFF, 100). 00:08:15.788 INFO: Seed: 2843960429 00:08:15.788 INFO: Loaded 1 modules (390541 inline 8-bit counters): 390541 [0x2c82a4c, 0x2ce1fd9), 00:08:15.788 INFO: Loaded 1 PC tables (390541 PCs): 390541 [0x2ce1fe0,0x32d78b0), 00:08:15.788 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:08:15.788 INFO: A corpus is not provided, starting from an empty corpus 00:08:15.788 #2 INITED exec/s: 0 rss: 66Mb 00:08:15.788 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:15.788 This may also happen if the target rejected all inputs we tried so far 00:08:15.788 [2024-12-09 15:41:10.929229] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23902 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.788 [2024-12-09 15:41:10.929267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.357 NEW_FUNC[1/716]: 0x452728 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:08:16.357 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:16.357 #6 NEW cov: 12214 ft: 12213 corp: 2/24b lim: 105 exec/s: 0 rss: 73Mb L: 23/23 MS: 4 InsertByte-ChangeByte-CrossOver-InsertRepeatedBytes- 00:08:16.357 [2024-12-09 15:41:11.302018] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:34209792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.357 [2024-12-09 15:41:11.302071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.357 NEW_FUNC[1/1]: 0x1faaeb8 in thread_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1105 00:08:16.357 #11 NEW cov: 12351 ft: 12829 corp: 3/52b lim: 105 exec/s: 0 rss: 73Mb L: 28/28 MS: 5 ShuffleBytes-CopyPart-ChangeBit-CrossOver-InsertRepeatedBytes- 00:08:16.357 [2024-12-09 15:41:11.362162] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:34209792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.357 [2024-12-09 15:41:11.362190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.357 #12 NEW cov: 12357 ft: 13078 corp: 4/92b lim: 105 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 CopyPart- 00:08:16.357 [2024-12-09 15:41:11.432691] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:34209792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.357 [2024-12-09 15:41:11.432721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.357 [2024-12-09 15:41:11.432782] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.357 [2024-12-09 15:41:11.432800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.357 #13 NEW cov: 12442 ft: 13729 corp: 5/147b lim: 105 exec/s: 0 rss: 73Mb L: 55/55 MS: 1 CopyPart- 00:08:16.357 [2024-12-09 15:41:11.482840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:34209792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.357 [2024-12-09 15:41:11.482872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.357 [2024-12-09 15:41:11.482934] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.357 [2024-12-09 15:41:11.482951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:08:16.357 #14 NEW cov: 12442 ft: 13794 corp: 6/190b lim: 105 exec/s: 0 rss: 73Mb L: 43/55 MS: 1 CrossOver- 00:08:16.357 [2024-12-09 15:41:11.553159] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:144680345676153346 len:515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.357 [2024-12-09 15:41:11.553187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.357 [2024-12-09 15:41:11.553244] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:144680345676153346 len:515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.357 [2024-12-09 15:41:11.553264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.357 #15 NEW cov: 12442 ft: 13860 corp: 7/249b lim: 105 exec/s: 0 rss: 73Mb L: 59/59 MS: 1 InsertRepeatedBytes- 00:08:16.617 [2024-12-09 15:41:11.603004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.617 [2024-12-09 15:41:11.603034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.617 #20 NEW cov: 12442 ft: 13970 corp: 8/288b lim: 105 exec/s: 0 rss: 73Mb L: 39/59 MS: 5 ShuffleBytes-CopyPart-ChangeByte-EraseBytes-InsertRepeatedBytes- 00:08:16.617 [2024-12-09 15:41:11.653202] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23902 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.617 [2024-12-09 15:41:11.653229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.617 #26 NEW cov: 12442 ft: 14014 corp: 9/311b lim: 105 exec/s: 0 rss: 74Mb L: 23/59 MS: 1 CrossOver- 00:08:16.617 [2024-12-09 15:41:11.723544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:33554432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.617 [2024-12-09 15:41:11.723573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.617 #28 NEW cov: 12442 ft: 14101 corp: 10/346b lim: 105 exec/s: 0 rss: 74Mb L: 35/59 MS: 2 ChangeBit-InsertRepeatedBytes- 00:08:16.617 [2024-12-09 15:41:11.773733] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.617 [2024-12-09 15:41:11.773763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.617 NEW_FUNC[1/1]: 0x1c54978 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:16.617 #29 NEW cov: 12459 ft: 14147 corp: 11/385b lim: 105 exec/s: 0 rss: 74Mb L: 39/59 MS: 1 ShuffleBytes- 00:08:16.877 [2024-12-09 15:41:11.844176] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.877 [2024-12-09 15:41:11.844209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.877 #30 NEW cov: 12459 ft: 14212 corp: 12/424b 
lim: 105 exec/s: 0 rss: 74Mb L: 39/59 MS: 1 ShuffleBytes- 00:08:16.877 [2024-12-09 15:41:11.894228] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:34209792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.877 [2024-12-09 15:41:11.894256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.877 #36 NEW cov: 12459 ft: 14229 corp: 13/452b lim: 105 exec/s: 0 rss: 74Mb L: 28/59 MS: 1 CrossOver- 00:08:16.877 [2024-12-09 15:41:11.945266] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23809 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.877 [2024-12-09 15:41:11.945294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.877 [2024-12-09 15:41:11.945364] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.877 [2024-12-09 15:41:11.945383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.877 [2024-12-09 15:41:11.945471] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.877 [2024-12-09 15:41:11.945487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.877 [2024-12-09 15:41:11.945584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.877 [2024-12-09 15:41:11.945605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.877 #37 NEW cov: 12459 ft: 14835 corp: 14/540b lim: 105 exec/s: 37 rss: 74Mb L: 88/88 MS: 1 InsertRepeatedBytes- 00:08:16.877 [2024-12-09 15:41:12.004989] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.877 [2024-12-09 15:41:12.005021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.877 [2024-12-09 15:41:12.005091] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748927340941026854 len:56027 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.877 [2024-12-09 15:41:12.005110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.877 #38 NEW cov: 12459 ft: 14878 corp: 15/598b lim: 105 exec/s: 38 rss: 74Mb L: 58/88 MS: 1 InsertRepeatedBytes- 00:08:16.877 [2024-12-09 15:41:12.055195] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:34209792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.877 [2024-12-09 15:41:12.055225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.877 [2024-12-09 15:41:12.055335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11821949021847552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.877 [2024-12-09 15:41:12.055354] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.877 #39 NEW cov: 12459 ft: 14940 corp: 16/641b lim: 105 exec/s: 39 rss: 74Mb L: 43/88 MS: 1 ChangeByte- 00:08:17.137 [2024-12-09 15:41:12.125548] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.137 [2024-12-09 15:41:12.125576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.137 [2024-12-09 15:41:12.125667] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.137 [2024-12-09 15:41:12.125684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.137 #45 NEW cov: 12459 ft: 14971 corp: 17/687b lim: 105 exec/s: 45 rss: 74Mb L: 46/88 MS: 1 InsertRepeatedBytes- 00:08:17.137 [2024-12-09 15:41:12.196626] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.137 [2024-12-09 15:41:12.196656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.137 [2024-12-09 15:41:12.196765] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748927340941026854 len:56027 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.137 [2024-12-09 15:41:12.196782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.137 [2024-12-09 15:41:12.196866] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.137 [2024-12-09 15:41:12.196892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.137 [2024-12-09 15:41:12.196982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:15770157678700714714 len:56027 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.137 [2024-12-09 15:41:12.197000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.137 [2024-12-09 15:41:12.197087] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:782178115779962406 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.137 [2024-12-09 15:41:12.197105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:17.137 #46 NEW cov: 12459 ft: 15009 corp: 18/792b lim: 105 exec/s: 46 rss: 74Mb L: 105/105 MS: 1 CopyPart- 00:08:17.137 [2024-12-09 15:41:12.266635] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23809 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.137 [2024-12-09 15:41:12.266663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.137 [2024-12-09 15:41:12.266742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.137 [2024-12-09 15:41:12.266761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.137 [2024-12-09 15:41:12.266842] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.137 [2024-12-09 15:41:12.266862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.137 [2024-12-09 15:41:12.266965] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.137 [2024-12-09 15:41:12.266986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.137 #47 NEW cov: 12459 ft: 15087 corp: 19/880b lim: 105 exec/s: 47 rss: 74Mb L: 88/105 MS: 1 ChangeBinInt- 00:08:17.137 [2024-12-09 15:41:12.336397] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:973078528 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.137 [2024-12-09 15:41:12.336425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.137 [2024-12-09 15:41:12.336509] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748927340941026854 len:56027 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.137 [2024-12-09 15:41:12.336528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.137 #48 NEW cov: 12459 ft: 15116 corp: 20/938b lim: 105 exec/s: 48 rss: 74Mb L: 58/105 MS: 1 ChangeBinInt- 00:08:17.396 [2024-12-09 15:41:12.386455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:144680345676153346 len:515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.396 [2024-12-09 15:41:12.386483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.396 #49 NEW cov: 12459 ft: 15160 corp: 21/972b lim: 105 exec/s: 49 rss: 74Mb L: 34/105 MS: 1 EraseBytes- 00:08:17.396 [2024-12-09 15:41:12.456660] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.396 [2024-12-09 15:41:12.456689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.396 #50 NEW cov: 12459 ft: 15177 corp: 22/1011b lim: 105 exec/s: 50 rss: 74Mb L: 39/105 MS: 1 ChangeByte- 00:08:17.396 [2024-12-09 15:41:12.507035] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:34209792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.396 [2024-12-09 15:41:12.507066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.396 #51 NEW cov: 12459 ft: 15198 corp: 23/1052b lim: 105 exec/s: 51 rss: 74Mb L: 41/105 MS: 1 InsertByte- 00:08:17.396 [2024-12-09 15:41:12.557565] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:34209792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:17.396 [2024-12-09 15:41:12.557592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.396 [2024-12-09 15:41:12.557656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.396 [2024-12-09 15:41:12.557676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.396 #57 NEW cov: 12459 ft: 15248 corp: 24/1095b lim: 105 exec/s: 57 rss: 74Mb L: 43/105 MS: 1 ChangeBinInt- 00:08:17.396 [2024-12-09 15:41:12.608622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.396 [2024-12-09 15:41:12.608649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.396 [2024-12-09 15:41:12.608793] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748927340941026854 len:56027 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.396 [2024-12-09 15:41:12.608815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.396 [2024-12-09 15:41:12.608894] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926570866812454 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.396 [2024-12-09 15:41:12.608916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.396 [2024-12-09 15:41:12.609012] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:15770157678700714714 len:56027 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.396 [2024-12-09 15:41:12.609031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.396 [2024-12-09 15:41:12.609127] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:782178115779962406 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.396 [2024-12-09 15:41:12.609149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:17.655 #58 NEW cov: 12459 ft: 15281 corp: 25/1200b lim: 105 exec/s: 58 rss: 74Mb L: 105/105 MS: 1 ShuffleBytes- 00:08:17.655 [2024-12-09 15:41:12.678184] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3288334336 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.655 [2024-12-09 15:41:12.678212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.655 [2024-12-09 15:41:12.678278] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748927340941026854 len:56027 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.655 [2024-12-09 15:41:12.678298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.655 #59 NEW cov: 12459 ft: 15327 corp: 26/1258b lim: 105 exec/s: 59 rss: 74Mb L: 58/105 MS: 1 ChangeBinInt- 00:08:17.655 [2024-12-09 15:41:12.749311] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.655 [2024-12-09 15:41:12.749340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.655 [2024-12-09 15:41:12.749476] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748927340941026854 len:56027 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.655 [2024-12-09 15:41:12.749496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.655 [2024-12-09 15:41:12.749563] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.655 [2024-12-09 15:41:12.749580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.655 [2024-12-09 15:41:12.749679] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:15770157678700714714 len:56027 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.655 [2024-12-09 15:41:12.749699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.655 [2024-12-09 15:41:12.749790] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:782178115779962406 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.655 [2024-12-09 15:41:12.749809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:17.655 #60 NEW cov: 12459 ft: 15390 corp: 27/1363b lim: 105 exec/s: 60 rss: 74Mb L: 105/105 MS: 1 ShuffleBytes- 00:08:17.655 [2024-12-09 15:41:12.799522] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.655 [2024-12-09 15:41:12.799549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.655 [2024-12-09 15:41:12.799700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748927340941026854 len:56027 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.655 [2024-12-09 15:41:12.799719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.655 [2024-12-09 15:41:12.799801] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926570866812454 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.655 [2024-12-09 15:41:12.799819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.655 [2024-12-09 15:41:12.799916] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:15770157678700714714 len:56027 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.655 [2024-12-09 15:41:12.799936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.656 [2024-12-09 15:41:12.800021] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 
lba:782178115779962406 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.656 [2024-12-09 15:41:12.800042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:17.656 #61 NEW cov: 12466 ft: 15426 corp: 28/1468b lim: 105 exec/s: 61 rss: 74Mb L: 105/105 MS: 1 ChangeByte- 00:08:17.656 [2024-12-09 15:41:12.869242] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.656 [2024-12-09 15:41:12.869272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.656 [2024-12-09 15:41:12.869345] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9947 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.656 [2024-12-09 15:41:12.869364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.656 [2024-12-09 15:41:12.869455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926570878655194 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.656 [2024-12-09 15:41:12.869470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.915 #62 NEW cov: 12466 ft: 15707 corp: 29/1549b lim: 105 exec/s: 62 rss: 74Mb L: 81/105 MS: 1 CrossOver- 00:08:17.915 [2024-12-09 15:41:12.939108] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567852090918 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.915 [2024-12-09 15:41:12.939138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.915 #63 NEW cov: 12466 ft: 15736 corp: 30/1588b lim: 105 exec/s: 31 rss: 74Mb L: 39/105 MS: 1 ChangeByte- 00:08:17.915 #63 DONE cov: 12466 ft: 15736 corp: 30/1588b lim: 105 exec/s: 31 rss: 74Mb 00:08:17.915 Done 63 runs in 2 second(s) 00:08:17.915 15:41:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:08:17.915 15:41:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:17.915 15:41:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:17.915 15:41:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:08:17.915 15:41:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:08:17.915 15:41:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:17.915 15:41:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:17.915 15:41:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:17.915 15:41:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:08:17.915 15:41:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:17.915 15:41:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:17.915 15:41:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:08:17.915 15:41:13 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:08:17.915 15:41:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:17.915 15:41:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:08:17.915 15:41:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:17.915 15:41:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:17.915 15:41:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:17.915 15:41:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:08:17.915 [2024-12-09 15:41:13.117508] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:08:17.916 [2024-12-09 15:41:13.117578] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid911082 ] 00:08:18.174 [2024-12-09 15:41:13.384630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.433 [2024-12-09 15:41:13.438840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.433 [2024-12-09 15:41:13.497745] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.433 [2024-12-09 15:41:13.513905] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:08:18.433 INFO: Running with entropic power schedule (0xFF, 100). 00:08:18.433 INFO: Seed: 1178990993 00:08:18.433 INFO: Loaded 1 modules (390541 inline 8-bit counters): 390541 [0x2c82a4c, 0x2ce1fd9), 00:08:18.433 INFO: Loaded 1 PC tables (390541 PCs): 390541 [0x2ce1fe0,0x32d78b0), 00:08:18.433 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:18.433 INFO: A corpus is not provided, starting from an empty corpus 00:08:18.433 #2 INITED exec/s: 0 rss: 67Mb 00:08:18.433 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:18.433 This may also happen if the target rejected all inputs we tried so far 00:08:18.433 [2024-12-09 15:41:13.569197] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24159 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.434 [2024-12-09 15:41:13.569229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.692 NEW_FUNC[1/717]: 0x455aa8 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:08:18.692 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:18.692 #4 NEW cov: 12256 ft: 12249 corp: 2/38b lim: 120 exec/s: 0 rss: 74Mb L: 37/37 MS: 2 ChangeByte-InsertRepeatedBytes- 00:08:18.692 [2024-12-09 15:41:13.900109] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24159 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.692 [2024-12-09 15:41:13.900146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.950 NEW_FUNC[1/1]: 0x19601c8 in nvme_qpair_is_admin_queue /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1190 00:08:18.950 #5 NEW cov: 12372 ft: 12770 corp: 3/75b lim: 120 exec/s: 0 rss: 74Mb L: 37/37 MS: 1 ChangeByte- 00:08:18.950 [2024-12-09 15:41:13.960213] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24159 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.950 [2024-12-09 15:41:13.960244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.950 #11 NEW cov: 12378 ft: 13105 corp: 4/112b lim: 120 exec/s: 0 rss: 74Mb L: 37/37 MS: 1 ShuffleBytes- 00:08:18.950 [2024-12-09 15:41:14.020314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24157 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.950 [2024-12-09 15:41:14.020345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.950 #12 NEW cov: 12463 ft: 13382 corp: 5/149b lim: 120 exec/s: 0 rss: 74Mb L: 37/37 MS: 1 ChangeBit- 00:08:18.950 [2024-12-09 15:41:14.080926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:62452 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.950 [2024-12-09 15:41:14.080953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.950 [2024-12-09 15:41:14.081003] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:17578661999652631539 len:62452 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.951 [2024-12-09 15:41:14.081019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.951 [2024-12-09 15:41:14.081071] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:17578661999652631539 len:62452 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.951 [2024-12-09 15:41:14.081086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.951 [2024-12-09 15:41:14.081140] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:17578661999652631539 len:62452 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.951 [2024-12-09 15:41:14.081156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.951 #18 NEW cov: 12463 ft: 14390 corp: 6/262b lim: 120 exec/s: 0 rss: 74Mb L: 113/113 MS: 1 InsertRepeatedBytes- 00:08:18.951 [2024-12-09 15:41:14.120592] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24157 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.951 [2024-12-09 15:41:14.120620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.951 #19 NEW cov: 12463 ft: 14479 corp: 7/299b lim: 120 exec/s: 0 rss: 74Mb L: 37/113 MS: 1 CMP- DE: "\033\000\000\000"- 00:08:19.209 [2024-12-09 15:41:14.181079] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069481693183 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.209 [2024-12-09 15:41:14.181108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.209 [2024-12-09 15:41:14.181151] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.209 [2024-12-09 15:41:14.181167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.209 [2024-12-09 15:41:14.181221] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.209 [2024-12-09 15:41:14.181237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.209 #21 NEW cov: 12463 ft: 14887 corp: 8/376b lim: 120 exec/s: 0 rss: 74Mb L: 77/113 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:08:19.209 [2024-12-09 15:41:14.231365] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070672875519 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.209 [2024-12-09 15:41:14.231394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.209 [2024-12-09 15:41:14.231443] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.209 [2024-12-09 15:41:14.231459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.209 [2024-12-09 15:41:14.231516] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.209 [2024-12-09 15:41:14.231532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.209 [2024-12-09 15:41:14.231584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.209 [2024-12-09 15:41:14.231600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.209 #24 NEW cov: 12463 ft: 14909 corp: 9/488b lim: 120 exec/s: 0 rss: 74Mb L: 112/113 MS: 3 ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:08:19.209 [2024-12-09 15:41:14.271311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9331882295522385758 len:33154 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.209 [2024-12-09 15:41:14.271338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.209 [2024-12-09 15:41:14.271383] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:9331882296111890817 len:33154 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.209 [2024-12-09 15:41:14.271399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.209 [2024-12-09 15:41:14.271453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:9331882296111890817 len:33154 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.209 [2024-12-09 15:41:14.271469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.209 #29 NEW cov: 12463 ft: 14957 corp: 10/583b lim: 120 exec/s: 0 rss: 74Mb L: 95/113 MS: 5 EraseBytes-ChangeBit-EraseBytes-ChangeBit-InsertRepeatedBytes- 00:08:19.209 [2024-12-09 15:41:14.311549] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.209 [2024-12-09 15:41:14.311575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.209 [2024-12-09 15:41:14.311626] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.209 [2024-12-09 15:41:14.311642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.209 [2024-12-09 15:41:14.311694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.209 [2024-12-09 15:41:14.311710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.209 [2024-12-09 15:41:14.311762] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.209 [2024-12-09 15:41:14.311777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.209 #31 NEW cov: 12463 ft: 15024 corp: 11/691b lim: 120 exec/s: 0 rss: 74Mb L: 108/113 MS: 2 ChangeByte-InsertRepeatedBytes- 00:08:19.209 [2024-12-09 15:41:14.351533] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:71873565974043 len:24159 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.209 [2024-12-09 15:41:14.351561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.209 [2024-12-09 15:41:14.351610] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:9114861777597660798 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.209 [2024-12-09 15:41:14.351627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.209 [2024-12-09 15:41:14.351686] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:9114861777597660798 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.209 [2024-12-09 15:41:14.351702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.209 #34 NEW cov: 12463 ft: 15062 corp: 12/776b lim: 120 exec/s: 0 rss: 74Mb L: 85/113 MS: 3 EraseBytes-CopyPart-InsertRepeatedBytes- 00:08:19.209 [2024-12-09 15:41:14.411671] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:71873565974043 len:24159 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.209 [2024-12-09 15:41:14.411699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.209 [2024-12-09 15:41:14.411746] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:9114861777597660798 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.209 [2024-12-09 15:41:14.411762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.209 [2024-12-09 15:41:14.411817] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:9114861777597660798 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.210 [2024-12-09 15:41:14.411834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.469 NEW_FUNC[1/1]: 0x1c54978 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:19.469 #35 NEW cov: 12486 ft: 15097 corp: 13/867b lim: 120 exec/s: 0 rss: 75Mb L: 91/113 MS: 1 CopyPart- 00:08:19.469 [2024-12-09 15:41:14.471572] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24159 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.469 [2024-12-09 15:41:14.471600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.469 #36 NEW cov: 12486 ft: 15160 corp: 14/904b lim: 120 exec/s: 0 rss: 75Mb L: 37/113 MS: 1 CopyPart- 00:08:19.469 [2024-12-09 15:41:14.512130] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.469 [2024-12-09 15:41:14.512157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.469 [2024-12-09 15:41:14.512204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.469 [2024-12-09 15:41:14.512220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.469 
[2024-12-09 15:41:14.512273] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.469 [2024-12-09 15:41:14.512287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.469 [2024-12-09 15:41:14.512340] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.469 [2024-12-09 15:41:14.512355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.469 #37 NEW cov: 12486 ft: 15169 corp: 15/1011b lim: 120 exec/s: 0 rss: 75Mb L: 107/113 MS: 1 InsertRepeatedBytes- 00:08:19.469 [2024-12-09 15:41:14.551949] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070672875519 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.469 [2024-12-09 15:41:14.551976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.469 [2024-12-09 15:41:14.552039] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.469 [2024-12-09 15:41:14.552055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.469 #38 NEW cov: 12486 ft: 15495 corp: 16/1075b lim: 120 exec/s: 38 rss: 75Mb L: 64/113 MS: 1 EraseBytes- 00:08:19.469 [2024-12-09 15:41:14.611962] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24159 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.469 [2024-12-09 15:41:14.611989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.469 #39 NEW cov: 12486 ft: 15548 corp: 17/1112b lim: 120 exec/s: 39 rss: 75Mb L: 37/113 MS: 1 CopyPart- 00:08:19.469 [2024-12-09 15:41:14.652538] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.469 [2024-12-09 15:41:14.652567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.469 [2024-12-09 15:41:14.652618] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.469 [2024-12-09 15:41:14.652634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.469 [2024-12-09 15:41:14.652690] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.469 [2024-12-09 15:41:14.652707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.469 [2024-12-09 15:41:14.652761] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.469 [2024-12-09 15:41:14.652776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.728 #40 NEW cov: 12486 ft: 15588 corp: 18/1223b lim: 120 exec/s: 40 rss: 75Mb L: 111/113 MS: 1 CopyPart- 00:08:19.728 [2024-12-09 15:41:14.712510] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:71873565974043 len:24159 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.728 [2024-12-09 15:41:14.712537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.728 [2024-12-09 15:41:14.712602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6781152745150635614 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.728 [2024-12-09 15:41:14.712619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.728 [2024-12-09 15:41:14.712674] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:9114861777597660798 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.728 [2024-12-09 15:41:14.712689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.728 #41 NEW cov: 12486 ft: 15619 corp: 19/1308b lim: 120 exec/s: 41 rss: 75Mb L: 85/113 MS: 1 CopyPart- 00:08:19.728 [2024-12-09 15:41:14.752320] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24157 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.728 [2024-12-09 15:41:14.752348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.728 #42 NEW cov: 12486 ft: 15632 corp: 20/1345b lim: 120 exec/s: 42 rss: 75Mb L: 37/113 MS: 1 ChangeBit- 00:08:19.728 [2024-12-09 15:41:14.792882] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.728 [2024-12-09 15:41:14.792909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.728 [2024-12-09 15:41:14.792974] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.728 [2024-12-09 15:41:14.792991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.728 [2024-12-09 15:41:14.793044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.728 [2024-12-09 15:41:14.793058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.728 [2024-12-09 15:41:14.793111] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.728 [2024-12-09 15:41:14.793127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.728 #43 NEW cov: 12486 ft: 15644 corp: 21/1452b lim: 120 exec/s: 43 rss: 75Mb L: 107/113 MS: 1 ChangeBit- 00:08:19.728 [2024-12-09 15:41:14.852610] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070997827166 len:24159 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.728 [2024-12-09 15:41:14.852637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.728 #44 NEW cov: 12486 ft: 15655 corp: 22/1493b lim: 120 exec/s: 44 rss: 75Mb L: 41/113 MS: 1 CMP- DE: "\377\377\377\377"- 00:08:19.728 [2024-12-09 15:41:14.892711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24157 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.728 [2024-12-09 15:41:14.892739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.728 #45 NEW cov: 12486 ft: 15660 corp: 23/1531b lim: 120 exec/s: 45 rss: 75Mb L: 38/113 MS: 1 InsertByte- 00:08:19.728 [2024-12-09 15:41:14.932813] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6799975843052281438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.728 [2024-12-09 15:41:14.932842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.728 #46 NEW cov: 12486 ft: 15661 corp: 24/1568b lim: 120 exec/s: 46 rss: 75Mb L: 37/113 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:08:19.987 [2024-12-09 15:41:14.973402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.987 [2024-12-09 15:41:14.973429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.987 [2024-12-09 15:41:14.973494] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.987 [2024-12-09 15:41:14.973511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.987 [2024-12-09 15:41:14.973561] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744072008407807 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.987 [2024-12-09 15:41:14.973577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.987 [2024-12-09 15:41:14.973631] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.987 [2024-12-09 15:41:14.973647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.987 #47 NEW cov: 12486 ft: 15737 corp: 25/1675b lim: 120 exec/s: 47 rss: 75Mb L: 107/113 MS: 1 CrossOver- 00:08:19.987 [2024-12-09 15:41:15.033549] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070672875519 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.987 [2024-12-09 15:41:15.033578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.987 [2024-12-09 15:41:15.033631] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.987 [2024-12-09 15:41:15.033649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.987 [2024-12-09 15:41:15.033701] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.987 [2024-12-09 15:41:15.033718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.987 [2024-12-09 15:41:15.033771] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.987 [2024-12-09 15:41:15.033788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.987 #48 NEW cov: 12486 ft: 15770 corp: 26/1787b lim: 120 exec/s: 48 rss: 75Mb L: 112/113 MS: 1 ChangeByte- 00:08:19.987 [2024-12-09 15:41:15.073635] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:62452 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.987 [2024-12-09 15:41:15.073663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.987 [2024-12-09 15:41:15.073715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:17578661999652631539 len:62452 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.987 [2024-12-09 15:41:15.073730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.987 [2024-12-09 15:41:15.073782] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:17578661999652631539 len:62452 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.987 [2024-12-09 15:41:15.073798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.987 [2024-12-09 15:41:15.073854] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:17578661999652631539 len:62452 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.987 [2024-12-09 15:41:15.073885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.987 #49 NEW cov: 12486 ft: 15783 corp: 27/1900b lim: 120 exec/s: 49 rss: 75Mb L: 113/113 MS: 1 ChangeBit- 00:08:19.987 [2024-12-09 15:41:15.133853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:62452 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.987 [2024-12-09 15:41:15.133881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.987 [2024-12-09 15:41:15.133949] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:17578661999652631539 len:62452 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.987 [2024-12-09 15:41:15.133966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.987 [2024-12-09 15:41:15.134018] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:17578661999652631539 len:62452 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.987 [2024-12-09 15:41:15.134035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.987 [2024-12-09 15:41:15.134093] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:17578661999652631539 len:62452 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.987 [2024-12-09 15:41:15.134108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.987 #50 NEW cov: 12486 ft: 15792 corp: 28/2019b lim: 120 exec/s: 50 rss: 75Mb L: 119/119 MS: 1 CopyPart- 00:08:19.987 [2024-12-09 15:41:15.193541] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24159 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.987 [2024-12-09 15:41:15.193568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.246 #51 NEW cov: 12486 ft: 15798 corp: 29/2058b lim: 120 exec/s: 51 rss: 75Mb L: 39/119 MS: 1 InsertByte- 00:08:20.246 [2024-12-09 15:41:15.254186] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2130706432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.246 [2024-12-09 15:41:15.254213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.246 [2024-12-09 15:41:15.254278] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.246 [2024-12-09 15:41:15.254294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.246 [2024-12-09 15:41:15.254347] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.246 [2024-12-09 15:41:15.254362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.246 [2024-12-09 15:41:15.254416] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.247 [2024-12-09 15:41:15.254432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.247 [2024-12-09 15:41:15.294262] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2130706432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.247 [2024-12-09 15:41:15.294289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.247 [2024-12-09 15:41:15.294342] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.247 [2024-12-09 15:41:15.294358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.247 [2024-12-09 15:41:15.294427] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.247 [2024-12-09 15:41:15.294444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.247 [2024-12-09 15:41:15.294498] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:2172715008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.247 [2024-12-09 15:41:15.294513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.247 #56 NEW cov: 12486 ft: 15800 corp: 30/2167b lim: 120 exec/s: 56 rss: 75Mb L: 109/119 MS: 5 ChangeBit-InsertByte-ChangeBit-InsertRepeatedBytes-InsertRepeatedBytes- 00:08:20.247 [2024-12-09 15:41:15.334239] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:71873565974043 len:24159 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.247 [2024-12-09 15:41:15.334265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.247 [2024-12-09 15:41:15.334331] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:9114861777597660798 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.247 [2024-12-09 15:41:15.334351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.247 [2024-12-09 15:41:15.334404] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:9114861777597660798 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.247 [2024-12-09 15:41:15.334418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.247 #57 NEW cov: 12486 ft: 15811 corp: 31/2258b lim: 120 exec/s: 57 rss: 75Mb L: 91/119 MS: 1 ChangeByte- 00:08:20.247 [2024-12-09 15:41:15.394077] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24159 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.247 [2024-12-09 15:41:15.394104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.247 #58 NEW cov: 12486 ft: 15869 corp: 32/2295b lim: 120 exec/s: 58 rss: 75Mb L: 37/119 MS: 1 ChangeByte- 00:08:20.247 [2024-12-09 15:41:15.434661] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.247 [2024-12-09 15:41:15.434688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.247 [2024-12-09 15:41:15.434740] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.247 [2024-12-09 15:41:15.434756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.247 [2024-12-09 15:41:15.434809] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744072008407807 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.247 [2024-12-09 15:41:15.434825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.247 [2024-12-09 15:41:15.434886] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11140386617053741210 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.247 [2024-12-09 15:41:15.434902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.507 #59 NEW cov: 12486 ft: 15879 corp: 33/2402b lim: 120 exec/s: 59 rss: 75Mb L: 107/119 MS: 1 CMP- DE: "\001\000"- 00:08:20.507 [2024-12-09 15:41:15.494796] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.507 [2024-12-09 15:41:15.494824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.507 [2024-12-09 15:41:15.494899] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.507 [2024-12-09 15:41:15.494916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.507 [2024-12-09 15:41:15.494968] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.507 [2024-12-09 15:41:15.494983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.507 [2024-12-09 15:41:15.495038] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.507 [2024-12-09 15:41:15.495054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.507 #60 NEW cov: 12486 ft: 15894 corp: 34/2516b lim: 120 exec/s: 60 rss: 75Mb L: 114/119 MS: 1 CrossOver- 00:08:20.507 [2024-12-09 15:41:15.534505] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:9473 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.507 [2024-12-09 15:41:15.534534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.507 #61 NEW cov: 12486 ft: 15910 corp: 35/2553b lim: 120 exec/s: 30 rss: 75Mb L: 37/119 MS: 1 ChangeBinInt- 00:08:20.507 #61 DONE cov: 12486 ft: 15910 corp: 35/2553b lim: 120 exec/s: 30 rss: 75Mb 00:08:20.507 ###### Recommended dictionary. ###### 00:08:20.507 "\033\000\000\000" # Uses: 0 00:08:20.507 "\377\377\377\377" # Uses: 0 00:08:20.507 "\000\000\000\000\000\000\000\000" # Uses: 0 00:08:20.507 "\001\000" # Uses: 0 00:08:20.507 ###### End of recommended dictionary. 
###### 00:08:20.507 Done 61 runs in 2 second(s) 00:08:20.507 15:41:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:08:20.507 15:41:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:20.507 15:41:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:20.507 15:41:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:08:20.507 15:41:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:08:20.507 15:41:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:20.507 15:41:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:20.507 15:41:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:20.507 15:41:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:08:20.507 15:41:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:20.507 15:41:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:20.507 15:41:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:08:20.507 15:41:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:08:20.507 15:41:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:20.507 15:41:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:08:20.507 15:41:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:20.507 15:41:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:20.507 15:41:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:20.507 15:41:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:08:20.507 [2024-12-09 15:41:15.710681] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:08:20.507 [2024-12-09 15:41:15.710755] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid911397 ] 00:08:20.766 [2024-12-09 15:41:15.983232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.025 [2024-12-09 15:41:16.037915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.025 [2024-12-09 15:41:16.097186] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.025 [2024-12-09 15:41:16.113334] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:08:21.025 INFO: Running with entropic power schedule (0xFF, 100). 00:08:21.025 INFO: Seed: 3775013755 00:08:21.025 INFO: Loaded 1 modules (390541 inline 8-bit counters): 390541 [0x2c82a4c, 0x2ce1fd9), 00:08:21.025 INFO: Loaded 1 PC tables (390541 PCs): 390541 [0x2ce1fe0,0x32d78b0), 00:08:21.025 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:21.025 INFO: A corpus is not provided, starting from an empty corpus 00:08:21.025 #2 INITED exec/s: 0 rss: 66Mb 00:08:21.025 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:21.025 This may also happen if the target rejected all inputs we tried so far 00:08:21.025 [2024-12-09 15:41:16.161295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:21.025 [2024-12-09 15:41:16.161325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.025 [2024-12-09 15:41:16.161377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:21.025 [2024-12-09 15:41:16.161391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.284 NEW_FUNC[1/716]: 0x459398 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:08:21.284 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:21.284 #7 NEW cov: 12202 ft: 12194 corp: 2/42b lim: 100 exec/s: 0 rss: 73Mb L: 41/41 MS: 5 ChangeBit-CopyPart-ChangeBit-ChangeBit-InsertRepeatedBytes- 00:08:21.284 [2024-12-09 15:41:16.482255] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:21.284 [2024-12-09 15:41:16.482303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.284 [2024-12-09 15:41:16.482384] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:21.284 [2024-12-09 15:41:16.482404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.543 #8 NEW cov: 12315 ft: 12835 corp: 3/83b lim: 100 exec/s: 0 rss: 74Mb L: 41/41 MS: 1 CMP- DE: "\000\000\000\000"- 00:08:21.543 [2024-12-09 15:41:16.542276] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:21.543 [2024-12-09 15:41:16.542303] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.543 [2024-12-09 15:41:16.542357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:21.543 [2024-12-09 15:41:16.542373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.543 #14 NEW cov: 12321 ft: 13030 corp: 4/124b lim: 100 exec/s: 0 rss: 74Mb L: 41/41 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:08:21.543 [2024-12-09 15:41:16.602271] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:21.543 [2024-12-09 15:41:16.602300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.543 #16 NEW cov: 12406 ft: 13638 corp: 5/162b lim: 100 exec/s: 0 rss: 74Mb L: 38/41 MS: 2 ChangeByte-InsertRepeatedBytes- 00:08:21.543 [2024-12-09 15:41:16.642507] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:21.543 [2024-12-09 15:41:16.642533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.543 [2024-12-09 15:41:16.642588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:21.543 [2024-12-09 15:41:16.642602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.543 #17 NEW cov: 12406 ft: 13771 corp: 6/207b lim: 100 exec/s: 0 rss: 74Mb L: 45/45 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:08:21.543 [2024-12-09 15:41:16.682618] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:21.543 [2024-12-09 15:41:16.682647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.543 [2024-12-09 15:41:16.682704] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:21.543 [2024-12-09 15:41:16.682719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.543 #18 NEW cov: 12406 ft: 13826 corp: 7/265b lim: 100 exec/s: 0 rss: 74Mb L: 58/58 MS: 1 InsertRepeatedBytes- 00:08:21.543 [2024-12-09 15:41:16.742787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:21.543 [2024-12-09 15:41:16.742813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.543 [2024-12-09 15:41:16.742873] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:21.543 [2024-12-09 15:41:16.742887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.543 #19 NEW cov: 12406 ft: 13858 corp: 8/306b lim: 100 exec/s: 0 rss: 74Mb L: 41/58 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:08:21.802 [2024-12-09 15:41:16.782953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:21.802 [2024-12-09 15:41:16.782979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.802 [2024-12-09 15:41:16.783017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:21.802 [2024-12-09 15:41:16.783031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.802 #20 NEW cov: 12406 ft: 13949 corp: 9/364b lim: 100 exec/s: 0 rss: 74Mb L: 58/58 MS: 1 ChangeBit- 00:08:21.802 [2024-12-09 15:41:16.842964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:21.802 [2024-12-09 15:41:16.842990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.802 #21 NEW cov: 12406 ft: 14012 corp: 10/394b lim: 100 exec/s: 0 rss: 74Mb L: 30/58 MS: 1 EraseBytes- 00:08:21.802 [2024-12-09 15:41:16.883205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:21.802 [2024-12-09 15:41:16.883232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.802 [2024-12-09 15:41:16.883293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:21.802 [2024-12-09 15:41:16.883308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.802 #22 NEW cov: 12406 ft: 14089 corp: 11/439b lim: 100 exec/s: 0 rss: 74Mb L: 45/58 MS: 1 CopyPart- 00:08:21.803 [2024-12-09 15:41:16.943348] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:21.803 [2024-12-09 15:41:16.943376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.803 [2024-12-09 15:41:16.943430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:21.803 [2024-12-09 15:41:16.943445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.803 #23 NEW cov: 12406 ft: 14158 corp: 12/480b lim: 100 exec/s: 0 rss: 74Mb L: 41/58 MS: 1 ChangeBinInt- 00:08:21.803 [2024-12-09 15:41:16.983502] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:21.803 [2024-12-09 15:41:16.983529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.803 [2024-12-09 15:41:16.983579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:21.803 [2024-12-09 15:41:16.983595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.803 #24 NEW cov: 12406 ft: 14193 corp: 13/521b lim: 100 exec/s: 0 rss: 74Mb L: 41/58 MS: 1 CMP- DE: "\013\000"- 00:08:22.062 [2024-12-09 15:41:17.043668] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.062 [2024-12-09 15:41:17.043694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.062 [2024-12-09 15:41:17.043731] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.062 [2024-12-09 15:41:17.043745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.062 NEW_FUNC[1/1]: 0x1c54978 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:22.062 #25 NEW cov: 12429 ft: 14260 corp: 14/579b lim: 100 exec/s: 0 rss: 74Mb L: 58/58 MS: 1 PersAutoDict- DE: "\013\000"- 00:08:22.062 [2024-12-09 15:41:17.083641] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.062 [2024-12-09 15:41:17.083668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.062 #26 NEW cov: 12429 ft: 14303 corp: 15/617b lim: 100 exec/s: 0 rss: 74Mb L: 38/58 MS: 1 ShuffleBytes- 00:08:22.062 [2024-12-09 15:41:17.143822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.062 [2024-12-09 15:41:17.143854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.062 #27 NEW cov: 12429 ft: 14409 corp: 16/647b lim: 100 exec/s: 27 rss: 74Mb L: 30/58 MS: 1 ChangeBit- 00:08:22.062 [2024-12-09 15:41:17.204121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.062 [2024-12-09 15:41:17.204148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.062 [2024-12-09 15:41:17.204212] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.062 [2024-12-09 15:41:17.204228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.062 #33 NEW cov: 12429 ft: 14485 corp: 17/692b lim: 100 exec/s: 33 rss: 74Mb L: 45/58 MS: 1 ChangeBinInt- 00:08:22.062 [2024-12-09 15:41:17.264228] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.062 [2024-12-09 15:41:17.264254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.062 [2024-12-09 15:41:17.264307] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.062 [2024-12-09 15:41:17.264322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.062 #34 NEW cov: 12429 ft: 14494 corp: 18/748b lim: 100 exec/s: 34 rss: 74Mb L: 56/58 MS: 1 CopyPart- 00:08:22.321 [2024-12-09 15:41:17.304342] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.321 [2024-12-09 15:41:17.304368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.321 [2024-12-09 15:41:17.304407] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.321 [2024-12-09 15:41:17.304421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.321 #35 NEW cov: 12429 ft: 14528 corp: 19/794b lim: 100 exec/s: 35 rss: 74Mb L: 46/58 MS: 1 InsertByte- 
00:08:22.321 [2024-12-09 15:41:17.344475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.321 [2024-12-09 15:41:17.344505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.321 [2024-12-09 15:41:17.344576] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.321 [2024-12-09 15:41:17.344591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.321 #36 NEW cov: 12429 ft: 14584 corp: 20/851b lim: 100 exec/s: 36 rss: 75Mb L: 57/58 MS: 1 InsertByte- 00:08:22.321 [2024-12-09 15:41:17.404645] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.321 [2024-12-09 15:41:17.404673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.321 [2024-12-09 15:41:17.404712] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.321 [2024-12-09 15:41:17.404727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.321 #37 NEW cov: 12429 ft: 14715 corp: 21/892b lim: 100 exec/s: 37 rss: 75Mb L: 41/58 MS: 1 ShuffleBytes- 00:08:22.321 [2024-12-09 15:41:17.444620] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.321 [2024-12-09 15:41:17.444646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.321 #38 NEW cov: 12429 ft: 14752 corp: 22/930b lim: 100 exec/s: 38 rss: 75Mb L: 38/58 MS: 1 ChangeByte- 00:08:22.321 [2024-12-09 15:41:17.504961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.321 [2024-12-09 15:41:17.504998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.321 [2024-12-09 15:41:17.505037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.321 [2024-12-09 15:41:17.505052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.321 #39 NEW cov: 12429 ft: 14784 corp: 23/975b lim: 100 exec/s: 39 rss: 75Mb L: 45/58 MS: 1 ChangeBit- 00:08:22.321 [2024-12-09 15:41:17.545017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.321 [2024-12-09 15:41:17.545043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.321 [2024-12-09 15:41:17.545084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.321 [2024-12-09 15:41:17.545099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.581 #40 NEW cov: 12429 ft: 14803 corp: 24/1020b lim: 100 exec/s: 40 rss: 75Mb L: 45/58 MS: 1 ChangeBinInt- 00:08:22.581 [2024-12-09 15:41:17.585188] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.581 
[2024-12-09 15:41:17.585215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.581 [2024-12-09 15:41:17.585259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.581 [2024-12-09 15:41:17.585282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.581 #41 NEW cov: 12429 ft: 14859 corp: 25/1077b lim: 100 exec/s: 41 rss: 75Mb L: 57/58 MS: 1 ShuffleBytes- 00:08:22.581 [2024-12-09 15:41:17.645416] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.581 [2024-12-09 15:41:17.645442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.581 [2024-12-09 15:41:17.645510] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.581 [2024-12-09 15:41:17.645528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.581 [2024-12-09 15:41:17.645586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:22.581 [2024-12-09 15:41:17.645600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.581 #42 NEW cov: 12429 ft: 15164 corp: 26/1143b lim: 100 exec/s: 42 rss: 75Mb L: 66/66 MS: 1 CopyPart- 00:08:22.581 [2024-12-09 15:41:17.705487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.581 [2024-12-09 15:41:17.705512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.581 [2024-12-09 15:41:17.705549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.581 [2024-12-09 15:41:17.705563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.581 #43 NEW cov: 12429 ft: 15175 corp: 27/1192b lim: 100 exec/s: 43 rss: 75Mb L: 49/66 MS: 1 InsertRepeatedBytes- 00:08:22.581 [2024-12-09 15:41:17.765799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.581 [2024-12-09 15:41:17.765825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.581 [2024-12-09 15:41:17.765897] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.581 [2024-12-09 15:41:17.765914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.581 [2024-12-09 15:41:17.765971] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:22.581 [2024-12-09 15:41:17.765985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.840 #44 NEW cov: 12429 ft: 15204 corp: 28/1258b lim: 100 exec/s: 44 rss: 75Mb L: 66/66 MS: 1 ShuffleBytes- 00:08:22.840 [2024-12-09 15:41:17.825819] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.840 [2024-12-09 15:41:17.825849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.840 [2024-12-09 15:41:17.825887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.840 [2024-12-09 15:41:17.825902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.840 #45 NEW cov: 12429 ft: 15208 corp: 29/1303b lim: 100 exec/s: 45 rss: 75Mb L: 45/66 MS: 1 PersAutoDict- DE: "\013\000"- 00:08:22.840 [2024-12-09 15:41:17.885997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.840 [2024-12-09 15:41:17.886023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.840 [2024-12-09 15:41:17.886075] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.840 [2024-12-09 15:41:17.886091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.840 #46 NEW cov: 12429 ft: 15260 corp: 30/1346b lim: 100 exec/s: 46 rss: 75Mb L: 43/66 MS: 1 EraseBytes- 00:08:22.840 [2024-12-09 15:41:17.926000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.840 [2024-12-09 15:41:17.926026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.840 #47 NEW cov: 12429 ft: 15267 corp: 31/1376b lim: 100 exec/s: 47 rss: 75Mb L: 30/66 MS: 1 EraseBytes- 00:08:22.840 [2024-12-09 15:41:17.986230] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.840 [2024-12-09 15:41:17.986258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.840 [2024-12-09 15:41:17.986319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.840 [2024-12-09 15:41:17.986335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.840 #48 NEW cov: 12429 ft: 15295 corp: 32/1417b lim: 100 exec/s: 48 rss: 75Mb L: 41/66 MS: 1 ChangeByte- 00:08:22.840 [2024-12-09 15:41:18.026273] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.840 [2024-12-09 15:41:18.026299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.100 #49 NEW cov: 12429 ft: 15297 corp: 33/1448b lim: 100 exec/s: 49 rss: 75Mb L: 31/66 MS: 1 InsertByte- 00:08:23.100 [2024-12-09 15:41:18.086510] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:23.100 [2024-12-09 15:41:18.086537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.100 [2024-12-09 15:41:18.086588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:23.100 [2024-12-09 15:41:18.086603] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.100 #50 NEW cov: 12429 ft: 15333 corp: 34/1504b lim: 100 exec/s: 50 rss: 75Mb L: 56/66 MS: 1 ChangeByte- 00:08:23.100 [2024-12-09 15:41:18.126511] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:23.100 [2024-12-09 15:41:18.126538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.100 #51 NEW cov: 12429 ft: 15359 corp: 35/1534b lim: 100 exec/s: 25 rss: 75Mb L: 30/66 MS: 1 ChangeByte- 00:08:23.100 #51 DONE cov: 12429 ft: 15359 corp: 35/1534b lim: 100 exec/s: 25 rss: 75Mb 00:08:23.100 ###### Recommended dictionary. ###### 00:08:23.100 "\000\000\000\000" # Uses: 3 00:08:23.100 "\013\000" # Uses: 2 00:08:23.100 ###### End of recommended dictionary. ###### 00:08:23.100 Done 51 runs in 2 second(s) 00:08:23.100 15:41:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:08:23.100 15:41:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:23.100 15:41:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:23.100 15:41:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:08:23.100 15:41:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:08:23.100 15:41:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:23.100 15:41:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:23.100 15:41:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:23.100 15:41:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:08:23.100 15:41:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:23.100 15:41:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:23.100 15:41:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:08:23.100 15:41:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:08:23.100 15:41:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:23.100 15:41:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:08:23.100 15:41:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:23.100 15:41:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:23.100 15:41:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:23.100 15:41:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:08:23.100 [2024-12-09 15:41:18.323801] 
Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:08:23.100 [2024-12-09 15:41:18.323880] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid911721 ] 00:08:23.669 [2024-12-09 15:41:18.594411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.669 [2024-12-09 15:41:18.644131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.669 [2024-12-09 15:41:18.703216] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.669 [2024-12-09 15:41:18.719361] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:08:23.669 INFO: Running with entropic power schedule (0xFF, 100). 00:08:23.669 INFO: Seed: 2088033486 00:08:23.669 INFO: Loaded 1 modules (390541 inline 8-bit counters): 390541 [0x2c82a4c, 0x2ce1fd9), 00:08:23.669 INFO: Loaded 1 PC tables (390541 PCs): 390541 [0x2ce1fe0,0x32d78b0), 00:08:23.669 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:23.669 INFO: A corpus is not provided, starting from an empty corpus 00:08:23.669 #2 INITED exec/s: 0 rss: 66Mb 00:08:23.669 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:23.669 This may also happen if the target rejected all inputs we tried so far 00:08:23.669 [2024-12-09 15:41:18.774887] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:23.669 [2024-12-09 15:41:18.774921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.669 [2024-12-09 15:41:18.774984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:23.669 [2024-12-09 15:41:18.775001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.669 [2024-12-09 15:41:18.775057] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:23.669 [2024-12-09 15:41:18.775073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:23.928 NEW_FUNC[1/716]: 0x45c358 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:08:23.928 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:23.928 #31 NEW cov: 12180 ft: 12179 corp: 2/33b lim: 50 exec/s: 0 rss: 73Mb L: 32/32 MS: 4 ChangeBit-CMP-EraseBytes-InsertRepeatedBytes- DE: "\000\000\000\000\000\000\000\000"- 00:08:23.928 [2024-12-09 15:41:19.095777] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:23.928 [2024-12-09 15:41:19.095822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.928 [2024-12-09 15:41:19.095897] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:23.928 [2024-12-09 
15:41:19.095918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.928 [2024-12-09 15:41:19.095979] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:23.928 [2024-12-09 15:41:19.095997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:23.928 #32 NEW cov: 12293 ft: 12750 corp: 3/65b lim: 50 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:24.187 [2024-12-09 15:41:19.155657] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:24.187 [2024-12-09 15:41:19.155686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.187 #38 NEW cov: 12299 ft: 13389 corp: 4/84b lim: 50 exec/s: 0 rss: 73Mb L: 19/32 MS: 1 EraseBytes- 00:08:24.187 [2024-12-09 15:41:19.195960] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:24.187 [2024-12-09 15:41:19.195989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.187 [2024-12-09 15:41:19.196042] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:24.187 [2024-12-09 15:41:19.196060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.187 [2024-12-09 15:41:19.196114] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:24.187 [2024-12-09 15:41:19.196130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.187 #39 NEW cov: 12384 ft: 13616 corp: 5/116b lim: 50 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:24.187 [2024-12-09 15:41:19.256217] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:24.187 [2024-12-09 15:41:19.256244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.187 [2024-12-09 15:41:19.256309] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:24.187 [2024-12-09 15:41:19.256325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.187 [2024-12-09 15:41:19.256377] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:4294967040 len:1 00:08:24.187 [2024-12-09 15:41:19.256394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.187 [2024-12-09 15:41:19.256446] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:08:24.187 [2024-12-09 15:41:19.256461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.187 #40 NEW cov: 12384 ft: 14013 corp: 6/161b lim: 50 exec/s: 0 rss: 73Mb L: 
45/45 MS: 1 InsertRepeatedBytes- 00:08:24.187 [2024-12-09 15:41:19.316285] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:24.187 [2024-12-09 15:41:19.316312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.187 [2024-12-09 15:41:19.316376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11075584 len:1 00:08:24.187 [2024-12-09 15:41:19.316393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.187 [2024-12-09 15:41:19.316448] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:24.187 [2024-12-09 15:41:19.316464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.187 #41 NEW cov: 12384 ft: 14101 corp: 7/193b lim: 50 exec/s: 0 rss: 74Mb L: 32/45 MS: 1 ChangeByte- 00:08:24.187 [2024-12-09 15:41:19.356387] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:24.187 [2024-12-09 15:41:19.356420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.187 [2024-12-09 15:41:19.356463] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:24.187 [2024-12-09 15:41:19.356479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.187 [2024-12-09 15:41:19.356551] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:24.187 [2024-12-09 15:41:19.356567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.187 #42 NEW cov: 12384 ft: 14194 corp: 8/229b lim: 50 exec/s: 0 rss: 74Mb L: 36/45 MS: 1 InsertRepeatedBytes- 00:08:24.187 [2024-12-09 15:41:19.396299] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:24.187 [2024-12-09 15:41:19.396328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.447 #43 NEW cov: 12384 ft: 14259 corp: 9/248b lim: 50 exec/s: 0 rss: 74Mb L: 19/45 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:08:24.447 [2024-12-09 15:41:19.456783] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:24.447 [2024-12-09 15:41:19.456810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.447 [2024-12-09 15:41:19.456883] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:281470681743360 len:65536 00:08:24.447 [2024-12-09 15:41:19.456900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.447 [2024-12-09 15:41:19.456952] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:4294967295 len:1 
00:08:24.447 [2024-12-09 15:41:19.456971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.447 [2024-12-09 15:41:19.457029] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:139 00:08:24.447 [2024-12-09 15:41:19.457046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.447 #44 NEW cov: 12384 ft: 14290 corp: 10/288b lim: 50 exec/s: 0 rss: 74Mb L: 40/45 MS: 1 InsertRepeatedBytes- 00:08:24.447 [2024-12-09 15:41:19.496708] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:24.447 [2024-12-09 15:41:19.496736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.447 [2024-12-09 15:41:19.496804] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:65280 len:1 00:08:24.447 [2024-12-09 15:41:19.496821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.447 #45 NEW cov: 12384 ft: 14600 corp: 11/317b lim: 50 exec/s: 0 rss: 74Mb L: 29/45 MS: 1 EraseBytes- 00:08:24.447 [2024-12-09 15:41:19.556945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:23862 00:08:24.447 [2024-12-09 15:41:19.556974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.447 [2024-12-09 15:41:19.557026] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5908722711785252756 len:1 00:08:24.447 [2024-12-09 15:41:19.557048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.447 [2024-12-09 15:41:19.557109] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:24.447 [2024-12-09 15:41:19.557125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.447 #46 NEW cov: 12384 ft: 14632 corp: 12/349b lim: 50 exec/s: 0 rss: 74Mb L: 32/45 MS: 1 CMP- DE: "]5(>'\224R\000"- 00:08:24.447 [2024-12-09 15:41:19.597041] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6781890799337472 len:6169 00:08:24.447 [2024-12-09 15:41:19.597070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.447 [2024-12-09 15:41:19.597109] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1095216660480 len:1 00:08:24.447 [2024-12-09 15:41:19.597125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.447 [2024-12-09 15:41:19.597179] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:24.447 [2024-12-09 15:41:19.597193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:08:24.447 #47 NEW cov: 12384 ft: 14722 corp: 13/383b lim: 50 exec/s: 0 rss: 74Mb L: 34/45 MS: 1 InsertRepeatedBytes- 00:08:24.447 [2024-12-09 15:41:19.657343] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:24.447 [2024-12-09 15:41:19.657371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.447 [2024-12-09 15:41:19.657423] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:281470681743360 len:65536 00:08:24.447 [2024-12-09 15:41:19.657438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.447 [2024-12-09 15:41:19.657505] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:4294967295 len:1 00:08:24.447 [2024-12-09 15:41:19.657521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.447 [2024-12-09 15:41:19.657577] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:08:24.447 [2024-12-09 15:41:19.657592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.707 NEW_FUNC[1/1]: 0x1c54978 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:24.707 #48 NEW cov: 12407 ft: 14757 corp: 14/431b lim: 50 exec/s: 0 rss: 74Mb L: 48/48 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:08:24.707 [2024-12-09 15:41:19.697094] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:63 len:1 00:08:24.707 [2024-12-09 15:41:19.697122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.707 #49 NEW cov: 12407 ft: 14940 corp: 15/450b lim: 50 exec/s: 0 rss: 74Mb L: 19/48 MS: 1 ChangeByte- 00:08:24.707 [2024-12-09 15:41:19.737478] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6781890799337472 len:6169 00:08:24.707 [2024-12-09 15:41:19.737505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.707 [2024-12-09 15:41:19.737548] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1095216660480 len:1 00:08:24.707 [2024-12-09 15:41:19.737564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.707 [2024-12-09 15:41:19.737615] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:24.707 [2024-12-09 15:41:19.737635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.707 #50 NEW cov: 12407 ft: 14998 corp: 16/480b lim: 50 exec/s: 50 rss: 74Mb L: 30/48 MS: 1 CrossOver- 00:08:24.707 [2024-12-09 15:41:19.797724] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:24.707 [2024-12-09 15:41:19.797751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.707 [2024-12-09 15:41:19.797804] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:65280 len:1 00:08:24.707 [2024-12-09 15:41:19.797820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.707 [2024-12-09 15:41:19.797869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:24.707 [2024-12-09 15:41:19.797885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.707 [2024-12-09 15:41:19.797938] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:08:24.707 [2024-12-09 15:41:19.797954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.707 #51 NEW cov: 12407 ft: 15044 corp: 17/523b lim: 50 exec/s: 51 rss: 74Mb L: 43/48 MS: 1 CrossOver- 00:08:24.707 [2024-12-09 15:41:19.837900] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:24.707 [2024-12-09 15:41:19.837928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.707 [2024-12-09 15:41:19.837981] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:281470681743360 len:23862 00:08:24.707 [2024-12-09 15:41:19.837998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.707 [2024-12-09 15:41:19.838051] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5908722711785252756 len:1 00:08:24.707 [2024-12-09 15:41:19.838067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.707 [2024-12-09 15:41:19.838119] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:139 00:08:24.707 [2024-12-09 15:41:19.838135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.707 #52 NEW cov: 12407 ft: 15071 corp: 18/563b lim: 50 exec/s: 52 rss: 74Mb L: 40/48 MS: 1 PersAutoDict- DE: "]5(>'\224R\000"- 00:08:24.707 [2024-12-09 15:41:19.877829] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:438086664255 len:26215 00:08:24.707 [2024-12-09 15:41:19.877862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.707 [2024-12-09 15:41:19.877910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:7378697629483820646 len:26215 00:08:24.707 [2024-12-09 15:41:19.877926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.708 [2024-12-09 15:41:19.877980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:7378585041211123302 len:1 00:08:24.708 [2024-12-09 
15:41:19.877995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.708 #53 NEW cov: 12407 ft: 15165 corp: 19/601b lim: 50 exec/s: 53 rss: 74Mb L: 38/48 MS: 1 InsertRepeatedBytes- 00:08:24.967 [2024-12-09 15:41:19.937948] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:24.967 [2024-12-09 15:41:19.937980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.967 [2024-12-09 15:41:19.938047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:24.967 [2024-12-09 15:41:19.938063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.967 #54 NEW cov: 12407 ft: 15206 corp: 20/630b lim: 50 exec/s: 54 rss: 74Mb L: 29/48 MS: 1 EraseBytes- 00:08:24.967 [2024-12-09 15:41:19.978193] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2147483648 len:1 00:08:24.967 [2024-12-09 15:41:19.978222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.967 [2024-12-09 15:41:19.978270] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:24.967 [2024-12-09 15:41:19.978286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.967 [2024-12-09 15:41:19.978338] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:24.967 [2024-12-09 15:41:19.978355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.967 #55 NEW cov: 12407 ft: 15257 corp: 21/666b lim: 50 exec/s: 55 rss: 74Mb L: 36/48 MS: 1 ChangeBit- 00:08:24.967 [2024-12-09 15:41:20.038475] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:24.967 [2024-12-09 15:41:20.038506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.967 [2024-12-09 15:41:20.038552] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:24.967 [2024-12-09 15:41:20.038568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.967 [2024-12-09 15:41:20.038622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:6716318666046046208 len:10133 00:08:24.967 [2024-12-09 15:41:20.038638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.967 [2024-12-09 15:41:20.038692] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:1375731712 len:139 00:08:24.967 [2024-12-09 15:41:20.038708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.967 #56 NEW cov: 12407 ft: 15276 corp: 22/706b lim: 50 
exec/s: 56 rss: 74Mb L: 40/48 MS: 1 PersAutoDict- DE: "]5(>'\224R\000"- 00:08:24.967 [2024-12-09 15:41:20.078311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:438086664255 len:26215 00:08:24.967 [2024-12-09 15:41:20.078342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.967 [2024-12-09 15:41:20.078385] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:7378697191397156454 len:139 00:08:24.967 [2024-12-09 15:41:20.078402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.967 #57 NEW cov: 12407 ft: 15316 corp: 23/726b lim: 50 exec/s: 57 rss: 74Mb L: 20/48 MS: 1 EraseBytes- 00:08:24.967 [2024-12-09 15:41:20.138725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:24.967 [2024-12-09 15:41:20.138752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.967 [2024-12-09 15:41:20.138796] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:24.967 [2024-12-09 15:41:20.138812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.967 [2024-12-09 15:41:20.138876] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:24.967 [2024-12-09 15:41:20.138892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.967 [2024-12-09 15:41:20.138947] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:1375731712 len:139 00:08:24.967 [2024-12-09 15:41:20.138964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.967 #58 NEW cov: 12407 ft: 15325 corp: 24/766b lim: 50 exec/s: 58 rss: 74Mb L: 40/48 MS: 1 CopyPart- 00:08:25.227 [2024-12-09 15:41:20.198918] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:25.227 [2024-12-09 15:41:20.198946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.227 [2024-12-09 15:41:20.198997] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:281470681743360 len:65536 00:08:25.227 [2024-12-09 15:41:20.199013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.227 [2024-12-09 15:41:20.199065] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:4294967295 len:1 00:08:25.227 [2024-12-09 15:41:20.199098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.227 [2024-12-09 15:41:20.199150] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:08:25.227 [2024-12-09 15:41:20.199166] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:25.227 #59 NEW cov: 12407 ft: 15336 corp: 25/814b lim: 50 exec/s: 59 rss: 74Mb L: 48/48 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:08:25.227 [2024-12-09 15:41:20.259105] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4398046511104 len:1 00:08:25.227 [2024-12-09 15:41:20.259132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.227 [2024-12-09 15:41:20.259168] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:281470681743360 len:65536 00:08:25.227 [2024-12-09 15:41:20.259183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.227 [2024-12-09 15:41:20.259235] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:4294967295 len:1 00:08:25.227 [2024-12-09 15:41:20.259252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.227 [2024-12-09 15:41:20.259305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:08:25.227 [2024-12-09 15:41:20.259320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:25.227 #60 NEW cov: 12407 ft: 15401 corp: 26/862b lim: 50 exec/s: 60 rss: 74Mb L: 48/48 MS: 1 ChangeBit- 00:08:25.227 [2024-12-09 15:41:20.299061] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:438120218687 len:26215 00:08:25.227 [2024-12-09 15:41:20.299104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.227 [2024-12-09 15:41:20.299149] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:7378697629483820646 len:26215 00:08:25.227 [2024-12-09 15:41:20.299164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.227 [2024-12-09 15:41:20.299218] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:7378585041211123302 len:1 00:08:25.227 [2024-12-09 15:41:20.299234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.227 #61 NEW cov: 12407 ft: 15433 corp: 27/900b lim: 50 exec/s: 61 rss: 74Mb L: 38/48 MS: 1 ChangeBit- 00:08:25.227 [2024-12-09 15:41:20.339304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:25.227 [2024-12-09 15:41:20.339331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.227 [2024-12-09 15:41:20.339398] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:281470681743360 len:23862 00:08:25.227 [2024-12-09 15:41:20.339415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 
dnr:1 00:08:25.227 [2024-12-09 15:41:20.339469] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5908727109831763860 len:1 00:08:25.227 [2024-12-09 15:41:20.339484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.227 [2024-12-09 15:41:20.339538] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:139 00:08:25.227 [2024-12-09 15:41:20.339553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:25.227 #62 NEW cov: 12407 ft: 15446 corp: 28/940b lim: 50 exec/s: 62 rss: 75Mb L: 40/48 MS: 1 ChangeBit- 00:08:25.227 [2024-12-09 15:41:20.399330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6781890799337472 len:6169 00:08:25.227 [2024-12-09 15:41:20.399358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.227 [2024-12-09 15:41:20.399409] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1095753531392 len:1 00:08:25.227 [2024-12-09 15:41:20.399425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.227 [2024-12-09 15:41:20.399478] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:25.227 [2024-12-09 15:41:20.399494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.227 #63 NEW cov: 12407 ft: 15456 corp: 29/970b lim: 50 exec/s: 63 rss: 75Mb L: 30/48 MS: 1 ChangeBit- 00:08:25.487 [2024-12-09 15:41:20.459513] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:25.487 [2024-12-09 15:41:20.459540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.487 [2024-12-09 15:41:20.459602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11075584 len:1 00:08:25.487 [2024-12-09 15:41:20.459618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.487 [2024-12-09 15:41:20.459675] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:25.487 [2024-12-09 15:41:20.459691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.487 #64 NEW cov: 12407 ft: 15458 corp: 30/1009b lim: 50 exec/s: 64 rss: 75Mb L: 39/48 MS: 1 InsertRepeatedBytes- 00:08:25.487 [2024-12-09 15:41:20.519667] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:25.487 [2024-12-09 15:41:20.519696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.487 [2024-12-09 15:41:20.519744] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2835349504 len:1 00:08:25.487 [2024-12-09 
15:41:20.519761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.487 [2024-12-09 15:41:20.519814] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:25.487 [2024-12-09 15:41:20.519831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.487 #65 NEW cov: 12407 ft: 15497 corp: 31/1048b lim: 50 exec/s: 65 rss: 75Mb L: 39/48 MS: 1 ShuffleBytes- 00:08:25.487 [2024-12-09 15:41:20.579608] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:25.487 [2024-12-09 15:41:20.579636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.487 #66 NEW cov: 12407 ft: 15507 corp: 32/1067b lim: 50 exec/s: 66 rss: 75Mb L: 19/48 MS: 1 CopyPart- 00:08:25.487 [2024-12-09 15:41:20.620108] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:25.487 [2024-12-09 15:41:20.620137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.487 [2024-12-09 15:41:20.620195] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:281470681743360 len:65536 00:08:25.487 [2024-12-09 15:41:20.620211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.487 [2024-12-09 15:41:20.620263] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:51681341472767 len:1 00:08:25.487 [2024-12-09 15:41:20.620280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.487 [2024-12-09 15:41:20.620333] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:08:25.487 [2024-12-09 15:41:20.620349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:25.487 #67 NEW cov: 12407 ft: 15515 corp: 33/1115b lim: 50 exec/s: 67 rss: 75Mb L: 48/48 MS: 1 ChangeByte- 00:08:25.487 [2024-12-09 15:41:20.659932] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:124554051584 len:1 00:08:25.487 [2024-12-09 15:41:20.659959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.487 [2024-12-09 15:41:20.660021] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:65280 len:1 00:08:25.487 [2024-12-09 15:41:20.660038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.487 #68 NEW cov: 12407 ft: 15556 corp: 34/1144b lim: 50 exec/s: 68 rss: 75Mb L: 29/48 MS: 1 ChangeBinInt- 00:08:25.487 [2024-12-09 15:41:20.700144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:25.487 [2024-12-09 15:41:20.700171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.487 [2024-12-09 15:41:20.700221] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:25.487 [2024-12-09 15:41:20.700238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.487 [2024-12-09 15:41:20.700297] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:25.487 [2024-12-09 15:41:20.700312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.748 #69 NEW cov: 12407 ft: 15585 corp: 35/1176b lim: 50 exec/s: 69 rss: 75Mb L: 32/48 MS: 1 ShuffleBytes- 00:08:25.748 [2024-12-09 15:41:20.740140] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:28587302322239 len:1 00:08:25.748 [2024-12-09 15:41:20.740167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.748 [2024-12-09 15:41:20.740206] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:139 00:08:25.748 [2024-12-09 15:41:20.740222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.748 #70 NEW cov: 12407 ft: 15613 corp: 36/1196b lim: 50 exec/s: 35 rss: 75Mb L: 20/48 MS: 1 InsertByte- 00:08:25.748 #70 DONE cov: 12407 ft: 15613 corp: 36/1196b lim: 50 exec/s: 35 rss: 75Mb 00:08:25.748 ###### Recommended dictionary. ###### 00:08:25.748 "\000\000\000\000\000\000\000\000" # Uses: 3 00:08:25.748 "]5(>'\224R\000" # Uses: 2 00:08:25.748 ###### End of recommended dictionary. 
###### 00:08:25.748 Done 70 runs in 2 second(s) 00:08:25.748 15:41:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:08:25.748 15:41:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:25.748 15:41:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:25.748 15:41:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:08:25.748 15:41:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:08:25.748 15:41:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:25.748 15:41:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:25.748 15:41:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:25.748 15:41:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:08:25.748 15:41:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:25.748 15:41:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:25.748 15:41:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:08:25.748 15:41:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:08:25.748 15:41:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:25.748 15:41:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:08:25.748 15:41:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:25.748 15:41:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:25.748 15:41:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:25.748 15:41:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:08:25.748 [2024-12-09 15:41:20.917475] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:08:25.748 [2024-12-09 15:41:20.917549] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid912046 ] 00:08:26.007 [2024-12-09 15:41:21.185099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.267 [2024-12-09 15:41:21.234004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.267 [2024-12-09 15:41:21.293457] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.267 [2024-12-09 15:41:21.309597] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:26.267 INFO: Running with entropic power schedule (0xFF, 100). 00:08:26.267 INFO: Seed: 384061423 00:08:26.267 INFO: Loaded 1 modules (390541 inline 8-bit counters): 390541 [0x2c82a4c, 0x2ce1fd9), 00:08:26.267 INFO: Loaded 1 PC tables (390541 PCs): 390541 [0x2ce1fe0,0x32d78b0), 00:08:26.267 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:26.267 INFO: A corpus is not provided, starting from an empty corpus 00:08:26.267 #2 INITED exec/s: 0 rss: 66Mb 00:08:26.267 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:26.267 This may also happen if the target rejected all inputs we tried so far 00:08:26.267 [2024-12-09 15:41:21.354554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:26.267 [2024-12-09 15:41:21.354590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.267 [2024-12-09 15:41:21.354642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:26.267 [2024-12-09 15:41:21.354661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.267 [2024-12-09 15:41:21.354692] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:26.267 [2024-12-09 15:41:21.354709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:26.526 NEW_FUNC[1/718]: 0x45df18 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:08:26.526 NEW_FUNC[2/718]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:26.526 #14 NEW cov: 12228 ft: 12232 corp: 2/57b lim: 90 exec/s: 0 rss: 74Mb L: 56/56 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:26.526 [2024-12-09 15:41:21.706386] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:26.526 [2024-12-09 15:41:21.706431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.526 [2024-12-09 15:41:21.706483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:26.526 [2024-12-09 15:41:21.706502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.526 
[2024-12-09 15:41:21.706532] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:26.526 [2024-12-09 15:41:21.706549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:26.786 #15 NEW cov: 12351 ft: 12844 corp: 3/113b lim: 90 exec/s: 0 rss: 74Mb L: 56/56 MS: 1 ChangeBit- 00:08:26.786 [2024-12-09 15:41:21.806555] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:26.786 [2024-12-09 15:41:21.806587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.786 [2024-12-09 15:41:21.806636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:26.786 [2024-12-09 15:41:21.806654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.786 [2024-12-09 15:41:21.806686] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:26.786 [2024-12-09 15:41:21.806707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:26.786 #16 NEW cov: 12357 ft: 13189 corp: 4/181b lim: 90 exec/s: 0 rss: 74Mb L: 68/68 MS: 1 CrossOver- 00:08:26.786 [2024-12-09 15:41:21.896792] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:26.786 [2024-12-09 15:41:21.896823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.786 [2024-12-09 15:41:21.896866] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:26.786 [2024-12-09 15:41:21.896885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.786 [2024-12-09 15:41:21.896917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:26.786 [2024-12-09 15:41:21.896934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:26.786 #17 NEW cov: 12442 ft: 13491 corp: 5/237b lim: 90 exec/s: 0 rss: 74Mb L: 56/68 MS: 1 CopyPart- 00:08:26.786 [2024-12-09 15:41:21.956985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:26.786 [2024-12-09 15:41:21.957016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.786 [2024-12-09 15:41:21.957050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:26.786 [2024-12-09 15:41:21.957068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.786 [2024-12-09 15:41:21.957100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:26.786 [2024-12-09 15:41:21.957117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:26.786 [2024-12-09 
15:41:21.957163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:26.786 [2024-12-09 15:41:21.957181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:26.786 #18 NEW cov: 12442 ft: 13945 corp: 6/315b lim: 90 exec/s: 0 rss: 74Mb L: 78/78 MS: 1 CrossOver- 00:08:27.045 [2024-12-09 15:41:22.017072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:27.045 [2024-12-09 15:41:22.017103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.045 [2024-12-09 15:41:22.017137] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:27.045 [2024-12-09 15:41:22.017155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.045 [2024-12-09 15:41:22.017187] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:27.045 [2024-12-09 15:41:22.017204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.045 #19 NEW cov: 12442 ft: 14041 corp: 7/383b lim: 90 exec/s: 0 rss: 74Mb L: 68/78 MS: 1 ChangeByte- 00:08:27.045 [2024-12-09 15:41:22.107405] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:27.045 [2024-12-09 15:41:22.107436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.045 [2024-12-09 15:41:22.107470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:27.045 [2024-12-09 15:41:22.107488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.045 [2024-12-09 15:41:22.107524] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:27.045 [2024-12-09 15:41:22.107540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.045 [2024-12-09 15:41:22.107570] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:27.045 [2024-12-09 15:41:22.107586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:27.045 #20 NEW cov: 12442 ft: 14152 corp: 8/461b lim: 90 exec/s: 0 rss: 74Mb L: 78/78 MS: 1 CrossOver- 00:08:27.045 [2024-12-09 15:41:22.197588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:27.045 [2024-12-09 15:41:22.197618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.045 [2024-12-09 15:41:22.197666] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:27.045 [2024-12-09 15:41:22.197684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.045 [2024-12-09 15:41:22.197715] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:27.045 [2024-12-09 15:41:22.197732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.045 [2024-12-09 15:41:22.197762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:27.045 [2024-12-09 15:41:22.197779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:27.304 NEW_FUNC[1/1]: 0x1c54978 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:27.304 #21 NEW cov: 12459 ft: 14186 corp: 9/545b lim: 90 exec/s: 0 rss: 74Mb L: 84/84 MS: 1 InsertRepeatedBytes- 00:08:27.304 [2024-12-09 15:41:22.297913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:27.304 [2024-12-09 15:41:22.297945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.304 [2024-12-09 15:41:22.297979] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:27.304 [2024-12-09 15:41:22.297997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.304 [2024-12-09 15:41:22.298028] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:27.304 [2024-12-09 15:41:22.298045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.304 [2024-12-09 15:41:22.298075] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:27.304 [2024-12-09 15:41:22.298091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:27.304 #22 NEW cov: 12459 ft: 14198 corp: 10/623b lim: 90 exec/s: 0 rss: 74Mb L: 78/84 MS: 1 CopyPart- 00:08:27.304 [2024-12-09 15:41:22.357982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:27.304 [2024-12-09 15:41:22.358014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.304 [2024-12-09 15:41:22.358048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:27.304 [2024-12-09 15:41:22.358066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.304 [2024-12-09 15:41:22.358097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:27.304 [2024-12-09 15:41:22.358119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.304 #23 NEW cov: 12459 ft: 14249 corp: 11/679b lim: 90 exec/s: 23 rss: 74Mb L: 56/84 MS: 1 CopyPart- 00:08:27.305 [2024-12-09 15:41:22.418148] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:27.305 [2024-12-09 15:41:22.418179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.305 [2024-12-09 15:41:22.418227] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:27.305 [2024-12-09 15:41:22.418245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.305 [2024-12-09 15:41:22.418277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:27.305 [2024-12-09 15:41:22.418293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.305 [2024-12-09 15:41:22.418323] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:27.305 [2024-12-09 15:41:22.418340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:27.305 #24 NEW cov: 12459 ft: 14268 corp: 12/758b lim: 90 exec/s: 24 rss: 74Mb L: 79/84 MS: 1 CrossOver- 00:08:27.305 [2024-12-09 15:41:22.468314] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:27.305 [2024-12-09 15:41:22.468345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.305 [2024-12-09 15:41:22.468379] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:27.305 [2024-12-09 15:41:22.468396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.305 [2024-12-09 15:41:22.468428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:27.305 [2024-12-09 15:41:22.468444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.305 [2024-12-09 15:41:22.468475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:27.305 [2024-12-09 15:41:22.468491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:27.564 #25 NEW cov: 12459 ft: 14302 corp: 13/836b lim: 90 exec/s: 25 rss: 75Mb L: 78/84 MS: 1 ChangeByte- 00:08:27.564 [2024-12-09 15:41:22.558554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:27.564 [2024-12-09 15:41:22.558585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.564 [2024-12-09 15:41:22.558633] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:27.564 [2024-12-09 15:41:22.558650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.564 [2024-12-09 15:41:22.558681] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:27.564 [2024-12-09 15:41:22.558698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.564 [2024-12-09 15:41:22.558729] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:27.564 [2024-12-09 15:41:22.558746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:27.564 #26 NEW cov: 12459 ft: 14324 corp: 14/915b lim: 90 exec/s: 26 rss: 75Mb L: 79/84 MS: 1 ChangeBit- 00:08:27.564 [2024-12-09 15:41:22.658797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:27.564 [2024-12-09 15:41:22.658827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.564 [2024-12-09 15:41:22.658881] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:27.564 [2024-12-09 15:41:22.658900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.564 [2024-12-09 15:41:22.658931] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:27.564 [2024-12-09 15:41:22.658949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.564 [2024-12-09 15:41:22.658979] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:27.564 [2024-12-09 15:41:22.658995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:27.564 #27 NEW cov: 12459 ft: 14352 corp: 15/994b lim: 90 exec/s: 27 rss: 75Mb L: 79/84 MS: 1 CrossOver- 00:08:27.564 [2024-12-09 15:41:22.718942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:27.564 [2024-12-09 15:41:22.718973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.564 [2024-12-09 15:41:22.719022] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:27.564 [2024-12-09 15:41:22.719041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.564 [2024-12-09 15:41:22.719071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:27.564 [2024-12-09 15:41:22.719089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.564 [2024-12-09 15:41:22.719119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:27.564 [2024-12-09 15:41:22.719136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:27.823 #28 NEW cov: 12459 ft: 14401 corp: 16/1073b lim: 90 exec/s: 28 rss: 75Mb L: 79/84 MS: 1 InsertByte- 00:08:27.823 [2024-12-09 15:41:22.819215] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:27.823 [2024-12-09 15:41:22.819246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.823 [2024-12-09 15:41:22.819280] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:27.823 [2024-12-09 15:41:22.819298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.823 [2024-12-09 15:41:22.819330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:27.823 [2024-12-09 15:41:22.819347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.823 #29 NEW cov: 12459 ft: 14440 corp: 17/1141b lim: 90 exec/s: 29 rss: 75Mb L: 68/84 MS: 1 CrossOver- 00:08:27.823 [2024-12-09 15:41:22.869294] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:27.823 [2024-12-09 15:41:22.869325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.823 [2024-12-09 15:41:22.869359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:27.823 [2024-12-09 15:41:22.869377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.823 [2024-12-09 15:41:22.869412] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:27.823 [2024-12-09 15:41:22.869429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.823 #30 NEW cov: 12459 ft: 14453 corp: 18/1196b lim: 90 exec/s: 30 rss: 75Mb L: 55/84 MS: 1 CrossOver- 00:08:27.823 [2024-12-09 15:41:22.929453] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:27.823 [2024-12-09 15:41:22.929483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.823 [2024-12-09 15:41:22.929517] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:27.823 [2024-12-09 15:41:22.929535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.823 [2024-12-09 15:41:22.929566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:27.823 [2024-12-09 15:41:22.929583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.823 #31 NEW cov: 12459 ft: 14478 corp: 19/1258b lim: 90 exec/s: 31 rss: 75Mb L: 62/84 MS: 1 InsertRepeatedBytes- 00:08:27.824 [2024-12-09 15:41:23.019743] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:27.824 [2024-12-09 15:41:23.019772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.824 [2024-12-09 15:41:23.019821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:27.824 [2024-12-09 15:41:23.019839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.824 [2024-12-09 
15:41:23.019878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:27.824 [2024-12-09 15:41:23.019911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.824 [2024-12-09 15:41:23.019942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:27.824 [2024-12-09 15:41:23.019959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.082 #32 NEW cov: 12459 ft: 14495 corp: 20/1336b lim: 90 exec/s: 32 rss: 75Mb L: 78/84 MS: 1 CopyPart- 00:08:28.082 [2024-12-09 15:41:23.079913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:28.082 [2024-12-09 15:41:23.079943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.082 [2024-12-09 15:41:23.079977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:28.082 [2024-12-09 15:41:23.079994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.083 [2024-12-09 15:41:23.080026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:28.083 [2024-12-09 15:41:23.080042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.083 [2024-12-09 15:41:23.080088] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:28.083 [2024-12-09 15:41:23.080105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.083 #33 NEW cov: 12459 ft: 14512 corp: 21/1415b lim: 90 exec/s: 33 rss: 75Mb L: 79/84 MS: 1 ChangeByte- 00:08:28.083 [2024-12-09 15:41:23.170103] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:28.083 [2024-12-09 15:41:23.170138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.083 [2024-12-09 15:41:23.170171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:28.083 [2024-12-09 15:41:23.170189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.083 [2024-12-09 15:41:23.170221] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:28.083 [2024-12-09 15:41:23.170238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.083 #34 NEW cov: 12459 ft: 14530 corp: 22/1471b lim: 90 exec/s: 34 rss: 75Mb L: 56/84 MS: 1 ChangeBinInt- 00:08:28.083 [2024-12-09 15:41:23.230172] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:28.083 [2024-12-09 15:41:23.230202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.083 [2024-12-09 
15:41:23.230237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:28.083 [2024-12-09 15:41:23.230255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.083 #35 NEW cov: 12466 ft: 14878 corp: 23/1512b lim: 90 exec/s: 35 rss: 75Mb L: 41/84 MS: 1 EraseBytes- 00:08:28.083 [2024-12-09 15:41:23.290450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:28.083 [2024-12-09 15:41:23.290480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.083 [2024-12-09 15:41:23.290529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:28.083 [2024-12-09 15:41:23.290547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.083 [2024-12-09 15:41:23.290577] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:28.083 [2024-12-09 15:41:23.290594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.083 [2024-12-09 15:41:23.290624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:28.083 [2024-12-09 15:41:23.290641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.342 #36 NEW cov: 12466 ft: 14895 corp: 24/1591b lim: 90 exec/s: 18 rss: 75Mb L: 79/84 MS: 1 ChangeBinInt- 00:08:28.343 #36 DONE cov: 12466 ft: 14895 corp: 24/1591b lim: 90 exec/s: 18 rss: 75Mb 00:08:28.343 Done 36 runs in 2 second(s) 00:08:28.343 15:41:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:08:28.343 15:41:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:28.343 15:41:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:28.343 15:41:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:08:28.343 15:41:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:08:28.343 15:41:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:28.343 15:41:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:28.343 15:41:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:28.343 15:41:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:08:28.343 15:41:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:28.343 15:41:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:28.343 15:41:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:08:28.343 15:41:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:08:28.343 15:41:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:28.343 15:41:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp 
adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:08:28.343 15:41:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:28.343 15:41:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:28.343 15:41:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:28.343 15:41:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:08:28.343 [2024-12-09 15:41:23.530780] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:08:28.343 [2024-12-09 15:41:23.530887] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid912404 ] 00:08:28.602 [2024-12-09 15:41:23.818362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.861 [2024-12-09 15:41:23.866260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.861 [2024-12-09 15:41:23.925514] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.861 [2024-12-09 15:41:23.941658] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:08:28.861 INFO: Running with entropic power schedule (0xFF, 100). 00:08:28.861 INFO: Seed: 3016061502 00:08:28.861 INFO: Loaded 1 modules (390541 inline 8-bit counters): 390541 [0x2c82a4c, 0x2ce1fd9), 00:08:28.861 INFO: Loaded 1 PC tables (390541 PCs): 390541 [0x2ce1fe0,0x32d78b0), 00:08:28.861 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:28.861 INFO: A corpus is not provided, starting from an empty corpus 00:08:28.861 #2 INITED exec/s: 0 rss: 66Mb 00:08:28.861 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:28.861 This may also happen if the target rejected all inputs we tried so far 00:08:28.861 [2024-12-09 15:41:23.997038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:28.861 [2024-12-09 15:41:23.997071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.120 NEW_FUNC[1/718]: 0x461148 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:08:29.120 NEW_FUNC[2/718]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:29.120 #7 NEW cov: 12213 ft: 12191 corp: 2/14b lim: 50 exec/s: 0 rss: 74Mb L: 13/13 MS: 5 ShuffleBytes-ChangeBit-ChangeBit-InsertByte-InsertRepeatedBytes- 00:08:29.120 [2024-12-09 15:41:24.327892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:29.120 [2024-12-09 15:41:24.327929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.379 #8 NEW cov: 12326 ft: 12758 corp: 3/27b lim: 50 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ChangeBit- 00:08:29.379 [2024-12-09 15:41:24.388362] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:29.379 [2024-12-09 15:41:24.388392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.379 [2024-12-09 15:41:24.388432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:29.379 [2024-12-09 15:41:24.388447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.379 [2024-12-09 15:41:24.388499] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:29.379 [2024-12-09 15:41:24.388514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.379 [2024-12-09 15:41:24.388567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:29.380 [2024-12-09 15:41:24.388582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:29.380 #9 NEW cov: 12332 ft: 13880 corp: 4/71b lim: 50 exec/s: 0 rss: 74Mb L: 44/44 MS: 1 InsertRepeatedBytes- 00:08:29.380 [2024-12-09 15:41:24.428485] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:29.380 [2024-12-09 15:41:24.428513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.380 [2024-12-09 15:41:24.428551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:29.380 [2024-12-09 15:41:24.428565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.380 [2024-12-09 15:41:24.428620] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:29.380 [2024-12-09 
15:41:24.428636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.380 [2024-12-09 15:41:24.428689] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:29.380 [2024-12-09 15:41:24.428704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:29.380 #10 NEW cov: 12417 ft: 14105 corp: 5/115b lim: 50 exec/s: 0 rss: 74Mb L: 44/44 MS: 1 ChangeByte- 00:08:29.380 [2024-12-09 15:41:24.488744] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:29.380 [2024-12-09 15:41:24.488771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.380 [2024-12-09 15:41:24.488836] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:29.380 [2024-12-09 15:41:24.488857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.380 [2024-12-09 15:41:24.488909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:29.380 [2024-12-09 15:41:24.488935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.380 [2024-12-09 15:41:24.488988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:29.380 [2024-12-09 15:41:24.489003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:29.380 [2024-12-09 15:41:24.489056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:29.380 [2024-12-09 15:41:24.489072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:29.380 #11 NEW cov: 12417 ft: 14312 corp: 6/165b lim: 50 exec/s: 0 rss: 74Mb L: 50/50 MS: 1 CopyPart- 00:08:29.380 [2024-12-09 15:41:24.528910] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:29.380 [2024-12-09 15:41:24.528937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.380 [2024-12-09 15:41:24.528981] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:29.380 [2024-12-09 15:41:24.528998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.380 [2024-12-09 15:41:24.529049] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:29.380 [2024-12-09 15:41:24.529064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.380 [2024-12-09 15:41:24.529133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:29.380 [2024-12-09 15:41:24.529149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 
cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:29.380 [2024-12-09 15:41:24.529202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:29.380 [2024-12-09 15:41:24.529217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:29.380 #13 NEW cov: 12417 ft: 14363 corp: 7/215b lim: 50 exec/s: 0 rss: 74Mb L: 50/50 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:29.380 [2024-12-09 15:41:24.568415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:29.380 [2024-12-09 15:41:24.568441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.380 #14 NEW cov: 12417 ft: 14485 corp: 8/228b lim: 50 exec/s: 0 rss: 74Mb L: 13/50 MS: 1 CMP- DE: "\001\000\000\005"- 00:08:29.640 [2024-12-09 15:41:24.608541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:29.640 [2024-12-09 15:41:24.608569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.640 #15 NEW cov: 12417 ft: 14539 corp: 9/241b lim: 50 exec/s: 0 rss: 74Mb L: 13/50 MS: 1 ChangeBit- 00:08:29.640 [2024-12-09 15:41:24.648662] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:29.640 [2024-12-09 15:41:24.648690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.640 #16 NEW cov: 12417 ft: 14573 corp: 10/255b lim: 50 exec/s: 0 rss: 74Mb L: 14/50 MS: 1 InsertByte- 00:08:29.640 [2024-12-09 15:41:24.709248] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:29.640 [2024-12-09 15:41:24.709275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.640 [2024-12-09 15:41:24.709325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:29.640 [2024-12-09 15:41:24.709341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.640 [2024-12-09 15:41:24.709393] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:29.640 [2024-12-09 15:41:24.709408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.640 [2024-12-09 15:41:24.709461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:29.640 [2024-12-09 15:41:24.709476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:29.640 #20 NEW cov: 12417 ft: 14616 corp: 11/299b lim: 50 exec/s: 0 rss: 74Mb L: 44/50 MS: 4 EraseBytes-ChangeByte-ChangeByte-InsertRepeatedBytes- 00:08:29.640 [2024-12-09 15:41:24.769560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:29.640 [2024-12-09 15:41:24.769586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:08:29.640 [2024-12-09 15:41:24.769647] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:29.640 [2024-12-09 15:41:24.769663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.640 [2024-12-09 15:41:24.769718] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:29.640 [2024-12-09 15:41:24.769733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.640 [2024-12-09 15:41:24.769785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:29.640 [2024-12-09 15:41:24.769800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:29.640 [2024-12-09 15:41:24.769858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:29.640 [2024-12-09 15:41:24.769874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:29.640 #21 NEW cov: 12417 ft: 14628 corp: 12/349b lim: 50 exec/s: 0 rss: 74Mb L: 50/50 MS: 1 CrossOver- 00:08:29.640 [2024-12-09 15:41:24.829170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:29.640 [2024-12-09 15:41:24.829198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.640 #24 NEW cov: 12417 ft: 14652 corp: 13/365b lim: 50 exec/s: 0 rss: 74Mb L: 16/50 MS: 3 EraseBytes-InsertByte-InsertRepeatedBytes- 00:08:29.899 [2024-12-09 15:41:24.869285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:29.899 [2024-12-09 15:41:24.869313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.899 NEW_FUNC[1/1]: 0x1c54978 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:29.899 #25 NEW cov: 12440 ft: 14700 corp: 14/379b lim: 50 exec/s: 0 rss: 74Mb L: 14/50 MS: 1 ChangeBit- 00:08:29.899 [2024-12-09 15:41:24.929468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:29.899 [2024-12-09 15:41:24.929498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.899 #30 NEW cov: 12440 ft: 14763 corp: 15/397b lim: 50 exec/s: 0 rss: 74Mb L: 18/50 MS: 5 ChangeBinInt-ChangeByte-PersAutoDict-CrossOver-CrossOver- DE: "\001\000\000\005"- 00:08:29.899 [2024-12-09 15:41:24.969977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:29.899 [2024-12-09 15:41:24.970015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.899 [2024-12-09 15:41:24.970075] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:29.899 [2024-12-09 15:41:24.970092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 
dnr:1 00:08:29.899 [2024-12-09 15:41:24.970144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:29.899 [2024-12-09 15:41:24.970159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.899 [2024-12-09 15:41:24.970212] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:29.899 [2024-12-09 15:41:24.970227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:29.899 #31 NEW cov: 12440 ft: 14770 corp: 16/441b lim: 50 exec/s: 31 rss: 74Mb L: 44/50 MS: 1 ChangeBinInt- 00:08:29.899 [2024-12-09 15:41:25.029717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:29.899 [2024-12-09 15:41:25.029748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.899 #32 NEW cov: 12440 ft: 14808 corp: 17/452b lim: 50 exec/s: 32 rss: 74Mb L: 11/50 MS: 1 EraseBytes- 00:08:29.899 [2024-12-09 15:41:25.070072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:29.899 [2024-12-09 15:41:25.070100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.899 [2024-12-09 15:41:25.070142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:29.899 [2024-12-09 15:41:25.070157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.899 [2024-12-09 15:41:25.070210] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:29.899 [2024-12-09 15:41:25.070226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.899 #33 NEW cov: 12440 ft: 15107 corp: 18/485b lim: 50 exec/s: 33 rss: 74Mb L: 33/50 MS: 1 InsertRepeatedBytes- 00:08:30.159 [2024-12-09 15:41:25.130378] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.159 [2024-12-09 15:41:25.130406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.159 [2024-12-09 15:41:25.130453] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.159 [2024-12-09 15:41:25.130468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.159 [2024-12-09 15:41:25.130520] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:30.159 [2024-12-09 15:41:25.130552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.159 [2024-12-09 15:41:25.130606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:30.159 [2024-12-09 15:41:25.130621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 
m:0 dnr:1 00:08:30.159 #34 NEW cov: 12440 ft: 15160 corp: 19/531b lim: 50 exec/s: 34 rss: 75Mb L: 46/50 MS: 1 EraseBytes- 00:08:30.159 [2024-12-09 15:41:25.190484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.159 [2024-12-09 15:41:25.190511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.159 [2024-12-09 15:41:25.190559] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.159 [2024-12-09 15:41:25.190574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.159 [2024-12-09 15:41:25.190628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:30.159 [2024-12-09 15:41:25.190643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.159 [2024-12-09 15:41:25.190699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:30.159 [2024-12-09 15:41:25.190715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:30.159 #35 NEW cov: 12440 ft: 15175 corp: 20/572b lim: 50 exec/s: 35 rss: 75Mb L: 41/50 MS: 1 InsertRepeatedBytes- 00:08:30.159 [2024-12-09 15:41:25.230825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.159 [2024-12-09 15:41:25.230857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.159 [2024-12-09 15:41:25.230938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.159 [2024-12-09 15:41:25.230955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.159 [2024-12-09 15:41:25.231010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:30.159 [2024-12-09 15:41:25.231025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.159 [2024-12-09 15:41:25.231078] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:30.159 [2024-12-09 15:41:25.231094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:30.159 [2024-12-09 15:41:25.231147] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:30.159 [2024-12-09 15:41:25.231163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:30.159 #36 NEW cov: 12440 ft: 15179 corp: 21/622b lim: 50 exec/s: 36 rss: 75Mb L: 50/50 MS: 1 CopyPart- 00:08:30.159 [2024-12-09 15:41:25.290669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.159 [2024-12-09 15:41:25.290696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:1 00:08:30.159 [2024-12-09 15:41:25.290752] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.159 [2024-12-09 15:41:25.290769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.159 [2024-12-09 15:41:25.290823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:30.159 [2024-12-09 15:41:25.290837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.159 #37 NEW cov: 12440 ft: 15193 corp: 22/658b lim: 50 exec/s: 37 rss: 75Mb L: 36/50 MS: 1 EraseBytes- 00:08:30.159 [2024-12-09 15:41:25.330806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.159 [2024-12-09 15:41:25.330834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.159 [2024-12-09 15:41:25.330898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.159 [2024-12-09 15:41:25.330915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.159 [2024-12-09 15:41:25.330974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:30.159 [2024-12-09 15:41:25.331001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.159 [2024-12-09 15:41:25.370913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.159 [2024-12-09 15:41:25.370940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.159 [2024-12-09 15:41:25.371012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.159 [2024-12-09 15:41:25.371028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.159 [2024-12-09 15:41:25.371083] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:30.160 [2024-12-09 15:41:25.371098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.418 #39 NEW cov: 12440 ft: 15215 corp: 23/696b lim: 50 exec/s: 39 rss: 75Mb L: 38/50 MS: 2 EraseBytes-PersAutoDict- DE: "\001\000\000\005"- 00:08:30.418 [2024-12-09 15:41:25.411328] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.418 [2024-12-09 15:41:25.411355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.418 [2024-12-09 15:41:25.411425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.418 [2024-12-09 15:41:25.411442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.418 [2024-12-09 15:41:25.411496] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:30.419 [2024-12-09 15:41:25.411512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.419 [2024-12-09 15:41:25.411565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:30.419 [2024-12-09 15:41:25.411581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:30.419 [2024-12-09 15:41:25.411638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:30.419 [2024-12-09 15:41:25.411655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:30.419 #40 NEW cov: 12440 ft: 15267 corp: 24/746b lim: 50 exec/s: 40 rss: 75Mb L: 50/50 MS: 1 CopyPart- 00:08:30.419 [2024-12-09 15:41:25.451152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.419 [2024-12-09 15:41:25.451179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.419 [2024-12-09 15:41:25.451231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.419 [2024-12-09 15:41:25.451248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.419 [2024-12-09 15:41:25.451304] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:30.419 [2024-12-09 15:41:25.451318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.419 #41 NEW cov: 12440 ft: 15314 corp: 25/784b lim: 50 exec/s: 41 rss: 75Mb L: 38/50 MS: 1 ChangeBinInt- 00:08:30.419 [2024-12-09 15:41:25.511155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.419 [2024-12-09 15:41:25.511181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.419 [2024-12-09 15:41:25.511235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.419 [2024-12-09 15:41:25.511252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.419 #42 NEW cov: 12440 ft: 15594 corp: 26/811b lim: 50 exec/s: 42 rss: 75Mb L: 27/50 MS: 1 InsertRepeatedBytes- 00:08:30.419 [2024-12-09 15:41:25.571183] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.419 [2024-12-09 15:41:25.571210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.419 #43 NEW cov: 12440 ft: 15616 corp: 27/829b lim: 50 exec/s: 43 rss: 75Mb L: 18/50 MS: 1 EraseBytes- 00:08:30.419 [2024-12-09 15:41:25.631808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.419 [2024-12-09 15:41:25.631834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.419 [2024-12-09 15:41:25.631900] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.419 [2024-12-09 15:41:25.631927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.419 [2024-12-09 15:41:25.631979] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:30.419 [2024-12-09 15:41:25.631993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.419 [2024-12-09 15:41:25.632046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:30.419 [2024-12-09 15:41:25.632063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:30.677 #44 NEW cov: 12440 ft: 15671 corp: 28/869b lim: 50 exec/s: 44 rss: 75Mb L: 40/50 MS: 1 InsertRepeatedBytes- 00:08:30.677 [2024-12-09 15:41:25.691510] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.677 [2024-12-09 15:41:25.691537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.677 #50 NEW cov: 12440 ft: 15676 corp: 29/885b lim: 50 exec/s: 50 rss: 75Mb L: 16/50 MS: 1 ShuffleBytes- 00:08:30.677 [2024-12-09 15:41:25.751830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.677 [2024-12-09 15:41:25.751862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.677 [2024-12-09 15:41:25.751918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.677 [2024-12-09 15:41:25.751934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.677 #51 NEW cov: 12440 ft: 15687 corp: 30/905b lim: 50 exec/s: 51 rss: 75Mb L: 20/50 MS: 1 PersAutoDict- DE: "\001\000\000\005"- 00:08:30.677 [2024-12-09 15:41:25.812019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.677 [2024-12-09 15:41:25.812046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.677 [2024-12-09 15:41:25.812099] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.677 [2024-12-09 15:41:25.812116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.677 #52 NEW cov: 12440 ft: 15722 corp: 31/931b lim: 50 exec/s: 52 rss: 75Mb L: 26/50 MS: 1 InsertRepeatedBytes- 00:08:30.677 [2024-12-09 15:41:25.852024] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.677 [2024-12-09 15:41:25.852051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.677 #53 NEW cov: 12440 ft: 15758 corp: 32/945b lim: 50 exec/s: 53 rss: 75Mb L: 14/50 MS: 1 ShuffleBytes- 
00:08:30.677 [2024-12-09 15:41:25.892239] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.677 [2024-12-09 15:41:25.892265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.677 [2024-12-09 15:41:25.892320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.677 [2024-12-09 15:41:25.892337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.936 #54 NEW cov: 12440 ft: 15770 corp: 33/969b lim: 50 exec/s: 54 rss: 75Mb L: 24/50 MS: 1 CrossOver- 00:08:30.936 [2024-12-09 15:41:25.952864] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.936 [2024-12-09 15:41:25.952894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.936 [2024-12-09 15:41:25.952957] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.936 [2024-12-09 15:41:25.952974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.936 [2024-12-09 15:41:25.953025] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:30.936 [2024-12-09 15:41:25.953040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.936 [2024-12-09 15:41:25.953090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:30.936 [2024-12-09 15:41:25.953106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:30.936 [2024-12-09 15:41:25.953158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:30.936 [2024-12-09 15:41:25.953173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:30.936 #55 NEW cov: 12440 ft: 15775 corp: 34/1019b lim: 50 exec/s: 27 rss: 75Mb L: 50/50 MS: 1 ChangeByte- 00:08:30.936 #55 DONE cov: 12440 ft: 15775 corp: 34/1019b lim: 50 exec/s: 27 rss: 75Mb 00:08:30.936 ###### Recommended dictionary. ###### 00:08:30.936 "\001\000\000\005" # Uses: 3 00:08:30.936 ###### End of recommended dictionary. 
###### 00:08:30.936 Done 55 runs in 2 second(s) 00:08:30.936 15:41:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:08:30.936 15:41:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:30.936 15:41:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:30.936 15:41:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:08:30.936 15:41:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:08:30.936 15:41:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:30.936 15:41:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:30.936 15:41:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:30.936 15:41:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:08:30.936 15:41:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:30.936 15:41:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:30.936 15:41:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:08:30.936 15:41:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:08:30.936 15:41:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:30.936 15:41:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:08:30.936 15:41:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:30.936 15:41:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:30.936 15:41:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:30.936 15:41:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:08:30.936 [2024-12-09 15:41:26.150034] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:08:30.936 [2024-12-09 15:41:26.150112] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid912764 ] 00:08:31.502 [2024-12-09 15:41:26.426636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.502 [2024-12-09 15:41:26.476771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.502 [2024-12-09 15:41:26.536031] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.502 [2024-12-09 15:41:26.552174] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:08:31.502 INFO: Running with entropic power schedule (0xFF, 100). 00:08:31.502 INFO: Seed: 1330091449 00:08:31.502 INFO: Loaded 1 modules (390541 inline 8-bit counters): 390541 [0x2c82a4c, 0x2ce1fd9), 00:08:31.502 INFO: Loaded 1 PC tables (390541 PCs): 390541 [0x2ce1fe0,0x32d78b0), 00:08:31.502 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:31.502 INFO: A corpus is not provided, starting from an empty corpus 00:08:31.502 #2 INITED exec/s: 0 rss: 66Mb 00:08:31.502 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:31.502 This may also happen if the target rejected all inputs we tried so far 00:08:31.502 [2024-12-09 15:41:26.601711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:31.502 [2024-12-09 15:41:26.601742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.502 [2024-12-09 15:41:26.601779] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:31.502 [2024-12-09 15:41:26.601796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.502 [2024-12-09 15:41:26.601855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:31.502 [2024-12-09 15:41:26.601872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.502 [2024-12-09 15:41:26.601944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:31.502 [2024-12-09 15:41:26.601961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:31.762 NEW_FUNC[1/718]: 0x463418 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:08:31.762 NEW_FUNC[2/718]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:31.762 #13 NEW cov: 12239 ft: 12238 corp: 2/73b lim: 85 exec/s: 0 rss: 74Mb L: 72/72 MS: 1 InsertRepeatedBytes- 00:08:31.762 [2024-12-09 15:41:26.932531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:31.762 [2024-12-09 15:41:26.932569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.762 [2024-12-09 
15:41:26.932637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:31.762 [2024-12-09 15:41:26.932653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.762 [2024-12-09 15:41:26.932707] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:31.762 [2024-12-09 15:41:26.932723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.762 [2024-12-09 15:41:26.932776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:31.762 [2024-12-09 15:41:26.932792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:31.762 #14 NEW cov: 12352 ft: 12687 corp: 3/145b lim: 85 exec/s: 0 rss: 74Mb L: 72/72 MS: 1 ChangeBinInt- 00:08:32.021 [2024-12-09 15:41:26.992631] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:32.021 [2024-12-09 15:41:26.992661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.021 [2024-12-09 15:41:26.992714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:32.021 [2024-12-09 15:41:26.992731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.021 [2024-12-09 15:41:26.992783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:32.021 [2024-12-09 15:41:26.992798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.021 [2024-12-09 15:41:26.992854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:32.021 [2024-12-09 15:41:26.992872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.021 #15 NEW cov: 12358 ft: 12880 corp: 4/218b lim: 85 exec/s: 0 rss: 74Mb L: 73/73 MS: 1 InsertByte- 00:08:32.021 [2024-12-09 15:41:27.052725] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:32.021 [2024-12-09 15:41:27.052753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.021 [2024-12-09 15:41:27.052815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:32.021 [2024-12-09 15:41:27.052833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.021 [2024-12-09 15:41:27.052889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:32.021 [2024-12-09 15:41:27.052906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.021 [2024-12-09 15:41:27.052961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 
nsid:0 00:08:32.021 [2024-12-09 15:41:27.052977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.021 #16 NEW cov: 12443 ft: 13310 corp: 5/290b lim: 85 exec/s: 0 rss: 74Mb L: 72/73 MS: 1 ChangeByte- 00:08:32.021 [2024-12-09 15:41:27.092819] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:32.021 [2024-12-09 15:41:27.092850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.021 [2024-12-09 15:41:27.092899] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:32.021 [2024-12-09 15:41:27.092915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.021 [2024-12-09 15:41:27.092966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:32.021 [2024-12-09 15:41:27.092982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.021 [2024-12-09 15:41:27.093033] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:32.021 [2024-12-09 15:41:27.093049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.021 #17 NEW cov: 12443 ft: 13449 corp: 6/362b lim: 85 exec/s: 0 rss: 74Mb L: 72/73 MS: 1 ChangeBinInt- 00:08:32.021 [2024-12-09 15:41:27.153023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:32.021 [2024-12-09 15:41:27.153050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.021 [2024-12-09 15:41:27.153092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:32.021 [2024-12-09 15:41:27.153108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.021 [2024-12-09 15:41:27.153157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:32.021 [2024-12-09 15:41:27.153172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.021 [2024-12-09 15:41:27.153227] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:32.021 [2024-12-09 15:41:27.153243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.021 #18 NEW cov: 12443 ft: 13525 corp: 7/439b lim: 85 exec/s: 0 rss: 74Mb L: 77/77 MS: 1 InsertRepeatedBytes- 00:08:32.021 [2024-12-09 15:41:27.212878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:32.021 [2024-12-09 15:41:27.212905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.021 [2024-12-09 15:41:27.212962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 
cid:1 nsid:0 00:08:32.021 [2024-12-09 15:41:27.212977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.021 #22 NEW cov: 12443 ft: 14018 corp: 8/473b lim: 85 exec/s: 0 rss: 74Mb L: 34/77 MS: 4 ChangeBit-InsertByte-CMP-InsertRepeatedBytes- DE: "\000\000\000\000\000\000\000H"- 00:08:32.281 [2024-12-09 15:41:27.253280] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:32.281 [2024-12-09 15:41:27.253308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.281 [2024-12-09 15:41:27.253355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:32.281 [2024-12-09 15:41:27.253372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.281 [2024-12-09 15:41:27.253421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:32.281 [2024-12-09 15:41:27.253436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.281 [2024-12-09 15:41:27.253488] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:32.281 [2024-12-09 15:41:27.253502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.281 #23 NEW cov: 12443 ft: 14137 corp: 9/554b lim: 85 exec/s: 0 rss: 74Mb L: 81/81 MS: 1 InsertRepeatedBytes- 00:08:32.281 [2024-12-09 15:41:27.293407] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:32.281 [2024-12-09 15:41:27.293434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.281 [2024-12-09 15:41:27.293498] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:32.281 [2024-12-09 15:41:27.293514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.281 [2024-12-09 15:41:27.293568] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:32.281 [2024-12-09 15:41:27.293583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.281 [2024-12-09 15:41:27.293638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:32.281 [2024-12-09 15:41:27.293654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.281 #24 NEW cov: 12443 ft: 14198 corp: 10/626b lim: 85 exec/s: 0 rss: 74Mb L: 72/81 MS: 1 ChangeBinInt- 00:08:32.281 [2024-12-09 15:41:27.333515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:32.281 [2024-12-09 15:41:27.333542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.281 [2024-12-09 15:41:27.333609] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:32.281 [2024-12-09 15:41:27.333625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.281 [2024-12-09 15:41:27.333677] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:32.281 [2024-12-09 15:41:27.333693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.281 [2024-12-09 15:41:27.333744] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:32.281 [2024-12-09 15:41:27.333759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.282 #25 NEW cov: 12443 ft: 14240 corp: 11/699b lim: 85 exec/s: 0 rss: 74Mb L: 73/81 MS: 1 InsertByte- 00:08:32.282 [2024-12-09 15:41:27.373624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:32.282 [2024-12-09 15:41:27.373651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.282 [2024-12-09 15:41:27.373716] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:32.282 [2024-12-09 15:41:27.373732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.282 [2024-12-09 15:41:27.373786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:32.282 [2024-12-09 15:41:27.373800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.282 [2024-12-09 15:41:27.373859] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:32.282 [2024-12-09 15:41:27.373875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.282 #26 NEW cov: 12443 ft: 14282 corp: 12/772b lim: 85 exec/s: 0 rss: 74Mb L: 73/81 MS: 1 InsertByte- 00:08:32.282 [2024-12-09 15:41:27.413745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:32.282 [2024-12-09 15:41:27.413773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.282 [2024-12-09 15:41:27.413822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:32.282 [2024-12-09 15:41:27.413838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.282 [2024-12-09 15:41:27.413912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:32.282 [2024-12-09 15:41:27.413929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.282 [2024-12-09 15:41:27.413981] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 
00:08:32.282 [2024-12-09 15:41:27.413995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.282 #27 NEW cov: 12443 ft: 14297 corp: 13/845b lim: 85 exec/s: 0 rss: 74Mb L: 73/81 MS: 1 InsertByte- 00:08:32.282 [2024-12-09 15:41:27.453909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:32.282 [2024-12-09 15:41:27.453935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.282 [2024-12-09 15:41:27.454002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:32.282 [2024-12-09 15:41:27.454019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.282 [2024-12-09 15:41:27.454071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:32.282 [2024-12-09 15:41:27.454087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.282 [2024-12-09 15:41:27.454138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:32.282 [2024-12-09 15:41:27.454153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.282 #28 NEW cov: 12443 ft: 14314 corp: 14/925b lim: 85 exec/s: 0 rss: 74Mb L: 80/81 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000H"- 00:08:32.282 [2024-12-09 15:41:27.493992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:32.282 [2024-12-09 15:41:27.494019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.282 [2024-12-09 15:41:27.494084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:32.282 [2024-12-09 15:41:27.494101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.282 [2024-12-09 15:41:27.494154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:32.282 [2024-12-09 15:41:27.494169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.282 [2024-12-09 15:41:27.494221] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:32.282 [2024-12-09 15:41:27.494236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.541 NEW_FUNC[1/1]: 0x1c54978 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:32.541 #30 NEW cov: 12466 ft: 14350 corp: 15/997b lim: 85 exec/s: 0 rss: 74Mb L: 72/81 MS: 2 PersAutoDict-CrossOver- DE: "\000\000\000\000\000\000\000H"- 00:08:32.541 [2024-12-09 15:41:27.533836] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:32.541 [2024-12-09 15:41:27.533869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.541 [2024-12-09 15:41:27.533923] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:32.541 [2024-12-09 15:41:27.533940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.541 #31 NEW cov: 12466 ft: 14412 corp: 16/1035b lim: 85 exec/s: 0 rss: 74Mb L: 38/81 MS: 1 EraseBytes- 00:08:32.541 [2024-12-09 15:41:27.573959] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:32.541 [2024-12-09 15:41:27.573987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.541 [2024-12-09 15:41:27.574063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:32.541 [2024-12-09 15:41:27.574080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.541 #32 NEW cov: 12466 ft: 14447 corp: 17/1083b lim: 85 exec/s: 32 rss: 74Mb L: 48/81 MS: 1 EraseBytes- 00:08:32.541 [2024-12-09 15:41:27.614337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:32.541 [2024-12-09 15:41:27.614364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.541 [2024-12-09 15:41:27.614428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:32.541 [2024-12-09 15:41:27.614444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.541 [2024-12-09 15:41:27.614497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:32.541 [2024-12-09 15:41:27.614514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.541 [2024-12-09 15:41:27.614567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:32.541 [2024-12-09 15:41:27.614582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.541 #33 NEW cov: 12466 ft: 14483 corp: 18/1156b lim: 85 exec/s: 33 rss: 74Mb L: 73/81 MS: 1 ChangeBit- 00:08:32.541 [2024-12-09 15:41:27.674523] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:32.541 [2024-12-09 15:41:27.674550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.541 [2024-12-09 15:41:27.674602] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:32.541 [2024-12-09 15:41:27.674617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.541 [2024-12-09 15:41:27.674667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:32.541 [2024-12-09 15:41:27.674681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.541 [2024-12-09 15:41:27.674732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:32.541 [2024-12-09 15:41:27.674746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.541 #34 NEW cov: 12466 ft: 14551 corp: 19/1238b lim: 85 exec/s: 34 rss: 74Mb L: 82/82 MS: 1 InsertRepeatedBytes- 00:08:32.542 [2024-12-09 15:41:27.734745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:32.542 [2024-12-09 15:41:27.734773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.542 [2024-12-09 15:41:27.734832] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:32.542 [2024-12-09 15:41:27.734852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.542 [2024-12-09 15:41:27.734902] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:32.542 [2024-12-09 15:41:27.734917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.542 [2024-12-09 15:41:27.734971] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:32.542 [2024-12-09 15:41:27.734997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.801 #35 NEW cov: 12466 ft: 14574 corp: 20/1313b lim: 85 exec/s: 35 rss: 74Mb L: 75/82 MS: 1 CrossOver- 00:08:32.801 [2024-12-09 15:41:27.794573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:32.801 [2024-12-09 15:41:27.794604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.801 [2024-12-09 15:41:27.794660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:32.801 [2024-12-09 15:41:27.794676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.801 #36 NEW cov: 12466 ft: 14585 corp: 21/1351b lim: 85 exec/s: 36 rss: 74Mb L: 38/82 MS: 1 ChangeBinInt- 00:08:32.801 [2024-12-09 15:41:27.855055] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:32.801 [2024-12-09 15:41:27.855083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.801 [2024-12-09 15:41:27.855131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:32.801 [2024-12-09 15:41:27.855148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.801 [2024-12-09 15:41:27.855200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:32.801 [2024-12-09 15:41:27.855215] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.801 [2024-12-09 15:41:27.855268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:32.801 [2024-12-09 15:41:27.855281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.801 #37 NEW cov: 12466 ft: 14610 corp: 22/1433b lim: 85 exec/s: 37 rss: 75Mb L: 82/82 MS: 1 ChangeBit- 00:08:32.801 [2024-12-09 15:41:27.915254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:32.801 [2024-12-09 15:41:27.915281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.801 [2024-12-09 15:41:27.915328] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:32.801 [2024-12-09 15:41:27.915343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.801 [2024-12-09 15:41:27.915395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:32.801 [2024-12-09 15:41:27.915410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.801 [2024-12-09 15:41:27.915462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:32.801 [2024-12-09 15:41:27.915477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.801 #38 NEW cov: 12466 ft: 14625 corp: 23/1513b lim: 85 exec/s: 38 rss: 75Mb L: 80/82 MS: 1 ChangeBit- 00:08:32.801 [2024-12-09 15:41:27.975371] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:32.801 [2024-12-09 15:41:27.975397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.801 [2024-12-09 15:41:27.975462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:32.801 [2024-12-09 15:41:27.975479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.801 [2024-12-09 15:41:27.975534] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:32.801 [2024-12-09 15:41:27.975550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.801 [2024-12-09 15:41:27.975605] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:32.801 [2024-12-09 15:41:27.975619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.801 #39 NEW cov: 12466 ft: 14637 corp: 24/1589b lim: 85 exec/s: 39 rss: 75Mb L: 76/82 MS: 1 CopyPart- 00:08:32.801 [2024-12-09 15:41:28.015459] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:32.801 [2024-12-09 15:41:28.015485] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.801 [2024-12-09 15:41:28.015551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:32.801 [2024-12-09 15:41:28.015568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.801 [2024-12-09 15:41:28.015620] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:32.801 [2024-12-09 15:41:28.015635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.801 [2024-12-09 15:41:28.015689] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:32.801 [2024-12-09 15:41:28.015705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:33.060 #40 NEW cov: 12466 ft: 14643 corp: 25/1663b lim: 85 exec/s: 40 rss: 75Mb L: 74/82 MS: 1 InsertByte- 00:08:33.060 [2024-12-09 15:41:28.055609] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:33.060 [2024-12-09 15:41:28.055636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.060 [2024-12-09 15:41:28.055687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:33.060 [2024-12-09 15:41:28.055703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.060 [2024-12-09 15:41:28.055771] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:33.060 [2024-12-09 15:41:28.055788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.060 [2024-12-09 15:41:28.055841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:33.060 [2024-12-09 15:41:28.055860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:33.060 #41 NEW cov: 12466 ft: 14652 corp: 26/1739b lim: 85 exec/s: 41 rss: 75Mb L: 76/82 MS: 1 CMP- DE: "<\001\000\000"- 00:08:33.060 [2024-12-09 15:41:28.095674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:33.060 [2024-12-09 15:41:28.095701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.060 [2024-12-09 15:41:28.095740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:33.060 [2024-12-09 15:41:28.095755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.060 [2024-12-09 15:41:28.095806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:33.060 [2024-12-09 15:41:28.095822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 
cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.060 [2024-12-09 15:41:28.095880] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:33.060 [2024-12-09 15:41:28.095895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:33.060 #42 NEW cov: 12466 ft: 14660 corp: 27/1821b lim: 85 exec/s: 42 rss: 75Mb L: 82/82 MS: 1 ChangeBit- 00:08:33.060 [2024-12-09 15:41:28.155856] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:33.060 [2024-12-09 15:41:28.155884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.060 [2024-12-09 15:41:28.155931] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:33.060 [2024-12-09 15:41:28.155947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.060 [2024-12-09 15:41:28.155999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:33.060 [2024-12-09 15:41:28.156014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.060 [2024-12-09 15:41:28.156065] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:33.060 [2024-12-09 15:41:28.156081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:33.060 #43 NEW cov: 12466 ft: 14716 corp: 28/1904b lim: 85 exec/s: 43 rss: 75Mb L: 83/83 MS: 1 InsertRepeatedBytes- 00:08:33.060 [2024-12-09 15:41:28.216016] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:33.060 [2024-12-09 15:41:28.216042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.060 [2024-12-09 15:41:28.216089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:33.060 [2024-12-09 15:41:28.216104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.060 [2024-12-09 15:41:28.216157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:33.060 [2024-12-09 15:41:28.216173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.060 [2024-12-09 15:41:28.216224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:33.060 [2024-12-09 15:41:28.216240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:33.060 [2024-12-09 15:41:28.276165] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:33.060 [2024-12-09 15:41:28.276191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.060 [2024-12-09 15:41:28.276257] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:33.060 [2024-12-09 15:41:28.276273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.060 [2024-12-09 15:41:28.276326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:33.060 [2024-12-09 15:41:28.276342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.060 [2024-12-09 15:41:28.276397] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:33.060 [2024-12-09 15:41:28.276413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:33.320 #45 NEW cov: 12466 ft: 14723 corp: 29/1973b lim: 85 exec/s: 45 rss: 75Mb L: 69/83 MS: 2 EraseBytes-ChangeBinInt- 00:08:33.320 [2024-12-09 15:41:28.316298] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:33.320 [2024-12-09 15:41:28.316329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.320 [2024-12-09 15:41:28.316369] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:33.320 [2024-12-09 15:41:28.316387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.320 [2024-12-09 15:41:28.316440] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:33.320 [2024-12-09 15:41:28.316455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.320 [2024-12-09 15:41:28.316508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:33.320 [2024-12-09 15:41:28.316524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:33.320 #46 NEW cov: 12466 ft: 14735 corp: 30/2057b lim: 85 exec/s: 46 rss: 75Mb L: 84/84 MS: 1 InsertRepeatedBytes- 00:08:33.320 [2024-12-09 15:41:28.356396] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:33.320 [2024-12-09 15:41:28.356423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.320 [2024-12-09 15:41:28.356470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:33.320 [2024-12-09 15:41:28.356485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.320 [2024-12-09 15:41:28.356536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:33.320 [2024-12-09 15:41:28.356552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.320 [2024-12-09 15:41:28.356604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 
nsid:0 00:08:33.320 [2024-12-09 15:41:28.356621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:33.320 #47 NEW cov: 12466 ft: 14740 corp: 31/2141b lim: 85 exec/s: 47 rss: 75Mb L: 84/84 MS: 1 InsertRepeatedBytes- 00:08:33.320 [2024-12-09 15:41:28.416596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:33.320 [2024-12-09 15:41:28.416624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.321 [2024-12-09 15:41:28.416687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:33.321 [2024-12-09 15:41:28.416703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.321 [2024-12-09 15:41:28.416758] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:33.321 [2024-12-09 15:41:28.416773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.321 [2024-12-09 15:41:28.416827] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:33.321 [2024-12-09 15:41:28.416849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:33.321 #48 NEW cov: 12466 ft: 14744 corp: 32/2214b lim: 85 exec/s: 48 rss: 75Mb L: 73/84 MS: 1 InsertByte- 00:08:33.321 [2024-12-09 15:41:28.456701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:33.321 [2024-12-09 15:41:28.456728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.321 [2024-12-09 15:41:28.456788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:33.321 [2024-12-09 15:41:28.456808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.321 [2024-12-09 15:41:28.456865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:33.321 [2024-12-09 15:41:28.456882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.321 [2024-12-09 15:41:28.456935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:33.321 [2024-12-09 15:41:28.456951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:33.321 #49 NEW cov: 12466 ft: 14758 corp: 33/2291b lim: 85 exec/s: 49 rss: 75Mb L: 77/84 MS: 1 PersAutoDict- DE: "<\001\000\000"- 00:08:33.321 [2024-12-09 15:41:28.516613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:33.321 [2024-12-09 15:41:28.516640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.321 [2024-12-09 15:41:28.516705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:33.321 [2024-12-09 15:41:28.516721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.581 #50 NEW cov: 12466 ft: 14774 corp: 34/2325b lim: 85 exec/s: 50 rss: 75Mb L: 34/84 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000H"- 00:08:33.581 [2024-12-09 15:41:28.577023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:33.581 [2024-12-09 15:41:28.577050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.581 [2024-12-09 15:41:28.577100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:33.581 [2024-12-09 15:41:28.577117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.581 [2024-12-09 15:41:28.577185] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:33.581 [2024-12-09 15:41:28.577202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.581 [2024-12-09 15:41:28.577254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:33.581 [2024-12-09 15:41:28.577270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:33.581 #51 NEW cov: 12466 ft: 14806 corp: 35/2397b lim: 85 exec/s: 25 rss: 75Mb L: 72/84 MS: 1 ChangeByte- 00:08:33.581 #51 DONE cov: 12466 ft: 14806 corp: 35/2397b lim: 85 exec/s: 25 rss: 75Mb 00:08:33.581 ###### Recommended dictionary. ###### 00:08:33.581 "\000\000\000\000\000\000\000H" # Uses: 3 00:08:33.581 "<\001\000\000" # Uses: 1 00:08:33.581 ###### End of recommended dictionary. 
###### 00:08:33.581 Done 51 runs in 2 second(s) 00:08:33.581 15:41:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:08:33.581 15:41:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:33.581 15:41:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:33.581 15:41:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:08:33.581 15:41:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:08:33.581 15:41:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:33.581 15:41:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:33.581 15:41:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:33.581 15:41:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:08:33.581 15:41:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:33.581 15:41:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:33.581 15:41:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:08:33.581 15:41:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:08:33.581 15:41:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:33.581 15:41:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:08:33.581 15:41:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:33.581 15:41:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:33.581 15:41:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:33.581 15:41:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:08:33.581 [2024-12-09 15:41:28.781952] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:08:33.581 [2024-12-09 15:41:28.782026] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid913124 ] 00:08:33.841 [2024-12-09 15:41:29.061333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.099 [2024-12-09 15:41:29.111593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.099 [2024-12-09 15:41:29.170928] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:34.099 [2024-12-09 15:41:29.187073] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:08:34.099 INFO: Running with entropic power schedule (0xFF, 100). 00:08:34.099 INFO: Seed: 3966088164 00:08:34.099 INFO: Loaded 1 modules (390541 inline 8-bit counters): 390541 [0x2c82a4c, 0x2ce1fd9), 00:08:34.099 INFO: Loaded 1 PC tables (390541 PCs): 390541 [0x2ce1fe0,0x32d78b0), 00:08:34.099 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:34.099 INFO: A corpus is not provided, starting from an empty corpus 00:08:34.099 #2 INITED exec/s: 0 rss: 66Mb 00:08:34.099 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:34.099 This may also happen if the target rejected all inputs we tried so far 00:08:34.099 [2024-12-09 15:41:29.242382] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:34.099 [2024-12-09 15:41:29.242415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.359 NEW_FUNC[1/717]: 0x466658 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:08:34.359 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:34.359 #12 NEW cov: 12172 ft: 12169 corp: 2/10b lim: 25 exec/s: 0 rss: 73Mb L: 9/9 MS: 5 CrossOver-ChangeBit-CrossOver-CopyPart-InsertRepeatedBytes- 00:08:34.359 [2024-12-09 15:41:29.583571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:34.359 [2024-12-09 15:41:29.583635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.359 [2024-12-09 15:41:29.583719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:34.359 [2024-12-09 15:41:29.583755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.618 #18 NEW cov: 12285 ft: 13159 corp: 3/20b lim: 25 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 InsertByte- 00:08:34.618 [2024-12-09 15:41:29.653751] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:34.618 [2024-12-09 15:41:29.653780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.618 [2024-12-09 15:41:29.653828] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:34.618 [2024-12-09 15:41:29.653843] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.618 [2024-12-09 15:41:29.653902] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:34.618 [2024-12-09 15:41:29.653918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.618 [2024-12-09 15:41:29.653972] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:34.618 [2024-12-09 15:41:29.653987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.618 #19 NEW cov: 12291 ft: 13882 corp: 4/40b lim: 25 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 CopyPart- 00:08:34.618 [2024-12-09 15:41:29.713531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:34.618 [2024-12-09 15:41:29.713559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.618 #20 NEW cov: 12376 ft: 14195 corp: 5/49b lim: 25 exec/s: 0 rss: 74Mb L: 9/20 MS: 1 ChangeByte- 00:08:34.618 [2024-12-09 15:41:29.754014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:34.618 [2024-12-09 15:41:29.754041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.618 [2024-12-09 15:41:29.754093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:34.618 [2024-12-09 15:41:29.754110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.618 [2024-12-09 15:41:29.754163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:34.618 [2024-12-09 15:41:29.754178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.618 [2024-12-09 15:41:29.754234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:34.618 [2024-12-09 15:41:29.754250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.618 #24 NEW cov: 12376 ft: 14313 corp: 6/73b lim: 25 exec/s: 0 rss: 74Mb L: 24/24 MS: 4 InsertByte-ChangeByte-InsertByte-InsertRepeatedBytes- 00:08:34.618 [2024-12-09 15:41:29.793892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:34.618 [2024-12-09 15:41:29.793918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.618 [2024-12-09 15:41:29.793975] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:34.618 [2024-12-09 15:41:29.793992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.618 #25 NEW cov: 12376 ft: 14413 corp: 7/85b lim: 25 exec/s: 0 rss: 74Mb L: 12/24 MS: 1 CMP- DE: "\016\000"- 00:08:34.618 [2024-12-09 15:41:29.833859] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:34.618 [2024-12-09 15:41:29.833886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.877 #26 NEW cov: 12376 ft: 14462 corp: 8/94b lim: 25 exec/s: 0 rss: 74Mb L: 9/24 MS: 1 ChangeBinInt- 00:08:34.877 [2024-12-09 15:41:29.894167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:34.877 [2024-12-09 15:41:29.894195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.877 [2024-12-09 15:41:29.894249] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:34.877 [2024-12-09 15:41:29.894266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.877 #27 NEW cov: 12376 ft: 14577 corp: 9/104b lim: 25 exec/s: 0 rss: 74Mb L: 10/24 MS: 1 ChangeBinInt- 00:08:34.877 [2024-12-09 15:41:29.934261] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:34.877 [2024-12-09 15:41:29.934288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.877 [2024-12-09 15:41:29.934338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:34.877 [2024-12-09 15:41:29.934353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.877 #28 NEW cov: 12376 ft: 14621 corp: 10/114b lim: 25 exec/s: 0 rss: 74Mb L: 10/24 MS: 1 ChangeBit- 00:08:34.877 [2024-12-09 15:41:29.994534] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:34.877 [2024-12-09 15:41:29.994561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.877 [2024-12-09 15:41:29.994621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:34.877 [2024-12-09 15:41:29.994637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.877 [2024-12-09 15:41:29.994692] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:34.877 [2024-12-09 15:41:29.994709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.877 #29 NEW cov: 12376 ft: 14901 corp: 11/133b lim: 25 exec/s: 0 rss: 74Mb L: 19/24 MS: 1 InsertRepeatedBytes- 00:08:34.877 [2024-12-09 15:41:30.054875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:34.877 [2024-12-09 15:41:30.054908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.877 [2024-12-09 15:41:30.054969] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:34.877 [2024-12-09 15:41:30.054985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.877 [2024-12-09 15:41:30.055040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:34.877 [2024-12-09 15:41:30.055056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.877 [2024-12-09 15:41:30.055112] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:34.877 [2024-12-09 15:41:30.055127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.877 #30 NEW cov: 12376 ft: 14954 corp: 12/156b lim: 25 exec/s: 0 rss: 74Mb L: 23/24 MS: 1 InsertRepeatedBytes- 00:08:34.877 [2024-12-09 15:41:30.094962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:34.877 [2024-12-09 15:41:30.094994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.877 [2024-12-09 15:41:30.095041] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:34.877 [2024-12-09 15:41:30.095057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.877 [2024-12-09 15:41:30.095114] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:34.877 [2024-12-09 15:41:30.095131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.877 [2024-12-09 15:41:30.095186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:34.877 [2024-12-09 15:41:30.095201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.136 NEW_FUNC[1/1]: 0x1c54978 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:35.136 #31 NEW cov: 12399 ft: 15068 corp: 13/176b lim: 25 exec/s: 0 rss: 74Mb L: 20/24 MS: 1 ChangeBinInt- 00:08:35.136 [2024-12-09 15:41:30.154808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.136 [2024-12-09 15:41:30.154839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.136 #32 NEW cov: 12399 ft: 15108 corp: 14/185b lim: 25 exec/s: 0 rss: 74Mb L: 9/24 MS: 1 ChangeBinInt- 00:08:35.136 [2024-12-09 15:41:30.195012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.136 [2024-12-09 15:41:30.195040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.136 [2024-12-09 15:41:30.195084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:35.136 [2024-12-09 15:41:30.195099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.136 #33 NEW cov: 12399 ft: 15124 corp: 15/195b lim: 25 exec/s: 33 rss: 74Mb L: 10/24 MS: 1 InsertByte- 00:08:35.137 [2024-12-09 15:41:30.255440] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.137 [2024-12-09 15:41:30.255468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.137 [2024-12-09 15:41:30.255515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:35.137 [2024-12-09 15:41:30.255532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.137 [2024-12-09 15:41:30.255587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:35.137 [2024-12-09 15:41:30.255603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.137 [2024-12-09 15:41:30.255659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:35.137 [2024-12-09 15:41:30.255675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.137 #34 NEW cov: 12399 ft: 15166 corp: 16/217b lim: 25 exec/s: 34 rss: 74Mb L: 22/24 MS: 1 InsertRepeatedBytes- 00:08:35.137 [2024-12-09 15:41:30.295158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.137 [2024-12-09 15:41:30.295186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.137 #35 NEW cov: 12399 ft: 15216 corp: 17/225b lim: 25 exec/s: 35 rss: 74Mb L: 8/24 MS: 1 EraseBytes- 00:08:35.137 [2024-12-09 15:41:30.335381] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.137 [2024-12-09 15:41:30.335410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.137 [2024-12-09 15:41:30.335475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:35.137 [2024-12-09 15:41:30.335491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.396 #36 NEW cov: 12399 ft: 15228 corp: 18/235b lim: 25 exec/s: 36 rss: 74Mb L: 10/24 MS: 1 ShuffleBytes- 00:08:35.396 [2024-12-09 15:41:30.395687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.396 [2024-12-09 15:41:30.395715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.396 [2024-12-09 15:41:30.395764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:35.396 [2024-12-09 15:41:30.395781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.396 [2024-12-09 15:41:30.395839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:35.396 [2024-12-09 15:41:30.395861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.396 #37 NEW cov: 12399 ft: 15239 corp: 
19/252b lim: 25 exec/s: 37 rss: 74Mb L: 17/24 MS: 1 InsertRepeatedBytes- 00:08:35.396 [2024-12-09 15:41:30.435762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.396 [2024-12-09 15:41:30.435790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.396 [2024-12-09 15:41:30.435837] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:35.396 [2024-12-09 15:41:30.435859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.396 [2024-12-09 15:41:30.435914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:35.396 [2024-12-09 15:41:30.435930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.396 #38 NEW cov: 12399 ft: 15255 corp: 20/270b lim: 25 exec/s: 38 rss: 74Mb L: 18/24 MS: 1 EraseBytes- 00:08:35.396 [2024-12-09 15:41:30.495737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.396 [2024-12-09 15:41:30.495766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.396 #39 NEW cov: 12399 ft: 15318 corp: 21/275b lim: 25 exec/s: 39 rss: 74Mb L: 5/24 MS: 1 EraseBytes- 00:08:35.396 [2024-12-09 15:41:30.535966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.396 [2024-12-09 15:41:30.535995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.396 [2024-12-09 15:41:30.536045] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:35.396 [2024-12-09 15:41:30.536062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.396 #40 NEW cov: 12399 ft: 15319 corp: 22/286b lim: 25 exec/s: 40 rss: 74Mb L: 11/24 MS: 1 PersAutoDict- DE: "\016\000"- 00:08:35.396 [2024-12-09 15:41:30.575926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.396 [2024-12-09 15:41:30.575954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.396 #41 NEW cov: 12399 ft: 15332 corp: 23/292b lim: 25 exec/s: 41 rss: 74Mb L: 6/24 MS: 1 InsertByte- 00:08:35.655 [2024-12-09 15:41:30.636083] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.655 [2024-12-09 15:41:30.636115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.655 #42 NEW cov: 12399 ft: 15336 corp: 24/300b lim: 25 exec/s: 42 rss: 74Mb L: 8/24 MS: 1 PersAutoDict- DE: "\016\000"- 00:08:35.655 [2024-12-09 15:41:30.696297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.655 [2024-12-09 15:41:30.696325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:08:35.655 #43 NEW cov: 12399 ft: 15347 corp: 25/309b lim: 25 exec/s: 43 rss: 74Mb L: 9/24 MS: 1 CMP- DE: "\325W\202@(\224R\000"- 00:08:35.655 [2024-12-09 15:41:30.736387] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.655 [2024-12-09 15:41:30.736415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.655 #44 NEW cov: 12399 ft: 15413 corp: 26/318b lim: 25 exec/s: 44 rss: 74Mb L: 9/24 MS: 1 CopyPart- 00:08:35.655 [2024-12-09 15:41:30.776839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.655 [2024-12-09 15:41:30.776871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.655 [2024-12-09 15:41:30.776943] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:35.655 [2024-12-09 15:41:30.776959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.655 [2024-12-09 15:41:30.777013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:35.655 [2024-12-09 15:41:30.777030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.655 [2024-12-09 15:41:30.777085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:35.655 [2024-12-09 15:41:30.777101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.655 #45 NEW cov: 12399 ft: 15425 corp: 27/338b lim: 25 exec/s: 45 rss: 74Mb L: 20/24 MS: 1 ChangeBit- 00:08:35.655 [2024-12-09 15:41:30.836693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.655 [2024-12-09 15:41:30.836720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.655 #46 NEW cov: 12399 ft: 15469 corp: 28/347b lim: 25 exec/s: 46 rss: 75Mb L: 9/24 MS: 1 ChangeBinInt- 00:08:35.655 [2024-12-09 15:41:30.876795] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.655 [2024-12-09 15:41:30.876823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.914 #47 NEW cov: 12399 ft: 15487 corp: 29/356b lim: 25 exec/s: 47 rss: 75Mb L: 9/24 MS: 1 ChangeByte- 00:08:35.914 [2024-12-09 15:41:30.916887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.914 [2024-12-09 15:41:30.916916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.914 #48 NEW cov: 12399 ft: 15521 corp: 30/362b lim: 25 exec/s: 48 rss: 75Mb L: 6/24 MS: 1 EraseBytes- 00:08:35.914 [2024-12-09 15:41:30.977087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.914 [2024-12-09 15:41:30.977115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:1 00:08:35.914 #49 NEW cov: 12399 ft: 15532 corp: 31/368b lim: 25 exec/s: 49 rss: 75Mb L: 6/24 MS: 1 CopyPart- 00:08:35.914 [2024-12-09 15:41:31.017195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.914 [2024-12-09 15:41:31.017225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.914 #50 NEW cov: 12399 ft: 15557 corp: 32/375b lim: 25 exec/s: 50 rss: 75Mb L: 7/24 MS: 1 EraseBytes- 00:08:35.914 [2024-12-09 15:41:31.057441] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.914 [2024-12-09 15:41:31.057468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.914 [2024-12-09 15:41:31.057524] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:35.914 [2024-12-09 15:41:31.057541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.914 #51 NEW cov: 12399 ft: 15598 corp: 33/389b lim: 25 exec/s: 51 rss: 75Mb L: 14/24 MS: 1 CrossOver- 00:08:35.914 [2024-12-09 15:41:31.097456] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.914 [2024-12-09 15:41:31.097483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.914 #52 NEW cov: 12399 ft: 15626 corp: 34/397b lim: 25 exec/s: 52 rss: 75Mb L: 8/24 MS: 1 ChangeBit- 00:08:36.175 [2024-12-09 15:41:31.157690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:36.175 [2024-12-09 15:41:31.157717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.175 [2024-12-09 15:41:31.157775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:36.175 [2024-12-09 15:41:31.157791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:36.175 #53 NEW cov: 12399 ft: 15634 corp: 35/407b lim: 25 exec/s: 53 rss: 75Mb L: 10/24 MS: 1 CopyPart- 00:08:36.175 [2024-12-09 15:41:31.197906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:36.175 [2024-12-09 15:41:31.197933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.175 [2024-12-09 15:41:31.197997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:36.175 [2024-12-09 15:41:31.198013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:36.175 [2024-12-09 15:41:31.198068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:36.175 [2024-12-09 15:41:31.198083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:36.175 #54 NEW cov: 12399 ft: 15637 corp: 36/424b lim: 25 exec/s: 27 rss: 75Mb L: 17/24 MS: 1 
InsertRepeatedBytes- 00:08:36.175 #54 DONE cov: 12399 ft: 15637 corp: 36/424b lim: 25 exec/s: 27 rss: 75Mb 00:08:36.175 ###### Recommended dictionary. ###### 00:08:36.175 "\016\000" # Uses: 2 00:08:36.175 "\325W\202@(\224R\000" # Uses: 0 00:08:36.175 ###### End of recommended dictionary. ###### 00:08:36.175 Done 54 runs in 2 second(s) 00:08:36.175 15:41:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:08:36.175 15:41:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:36.175 15:41:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:36.175 15:41:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:08:36.175 15:41:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:08:36.175 15:41:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:36.175 15:41:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:36.175 15:41:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:36.175 15:41:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:08:36.175 15:41:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:36.175 15:41:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:36.175 15:41:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:08:36.175 15:41:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:08:36.175 15:41:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:36.175 15:41:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:08:36.175 15:41:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:36.175 15:41:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:36.175 15:41:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:36.175 15:41:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:08:36.175 [2024-12-09 15:41:31.395857] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:08:36.175 [2024-12-09 15:41:31.395930] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid913479 ] 00:08:36.743 [2024-12-09 15:41:31.671422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.743 [2024-12-09 15:41:31.721282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.743 [2024-12-09 15:41:31.780757] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.743 [2024-12-09 15:41:31.796906] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:08:36.743 INFO: Running with entropic power schedule (0xFF, 100). 00:08:36.743 INFO: Seed: 2281123115 00:08:36.743 INFO: Loaded 1 modules (390541 inline 8-bit counters): 390541 [0x2c82a4c, 0x2ce1fd9), 00:08:36.743 INFO: Loaded 1 PC tables (390541 PCs): 390541 [0x2ce1fe0,0x32d78b0), 00:08:36.743 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:36.743 INFO: A corpus is not provided, starting from an empty corpus 00:08:36.743 #2 INITED exec/s: 0 rss: 66Mb 00:08:36.743 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:36.743 This may also happen if the target rejected all inputs we tried so far 00:08:36.743 [2024-12-09 15:41:31.852557] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:36.743 [2024-12-09 15:41:31.852588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.743 [2024-12-09 15:41:31.852626] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:36.743 [2024-12-09 15:41:31.852640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:36.744 [2024-12-09 15:41:31.852696] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:36.744 [2024-12-09 15:41:31.852712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.003 NEW_FUNC[1/714]: 0x467748 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:08:37.003 NEW_FUNC[2/714]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:37.003 #3 NEW cov: 12215 ft: 12243 corp: 2/73b lim: 100 exec/s: 0 rss: 73Mb L: 72/72 MS: 1 InsertRepeatedBytes- 00:08:37.003 [2024-12-09 15:41:32.183414] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.003 [2024-12-09 15:41:32.183453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.003 [2024-12-09 15:41:32.183509] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 
lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.003 [2024-12-09 15:41:32.183524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.003 [2024-12-09 15:41:32.183579] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.003 [2024-12-09 15:41:32.183594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.262 NEW_FUNC[1/4]: 0x1fa97a8 in spdk_thread_is_exited /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:760 00:08:37.262 NEW_FUNC[2/4]: 0x1faa538 in spdk_thread_get_from_ctx /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:829 00:08:37.262 #4 NEW cov: 12357 ft: 12834 corp: 3/145b lim: 100 exec/s: 0 rss: 73Mb L: 72/72 MS: 1 ShuffleBytes- 00:08:37.262 [2024-12-09 15:41:32.243474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.262 [2024-12-09 15:41:32.243505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.262 [2024-12-09 15:41:32.243542] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.262 [2024-12-09 15:41:32.243558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.262 [2024-12-09 15:41:32.243613] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.262 [2024-12-09 15:41:32.243629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.262 #5 NEW cov: 12363 ft: 12927 corp: 4/221b lim: 100 exec/s: 0 rss: 73Mb L: 76/76 MS: 1 CrossOver- 00:08:37.262 [2024-12-09 15:41:32.303640] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.262 [2024-12-09 15:41:32.303668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.262 [2024-12-09 15:41:32.303725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.262 [2024-12-09 15:41:32.303739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.263 [2024-12-09 15:41:32.303794] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.263 [2024-12-09 15:41:32.303809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.263 #6 NEW cov: 12448 ft: 13187 corp: 5/297b lim: 100 exec/s: 0 rss: 74Mb L: 76/76 MS: 1 ShuffleBytes- 00:08:37.263 [2024-12-09 15:41:32.363790] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.263 [2024-12-09 15:41:32.363820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.263 [2024-12-09 15:41:32.363878] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.263 [2024-12-09 15:41:32.363895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.263 [2024-12-09 15:41:32.363962] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.263 [2024-12-09 15:41:32.363978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.263 #7 NEW cov: 12448 ft: 13355 corp: 6/372b lim: 100 exec/s: 0 rss: 74Mb L: 75/76 MS: 1 CopyPart- 00:08:37.263 [2024-12-09 15:41:32.403753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:6293595036912670551 len:22360 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.263 [2024-12-09 15:41:32.403780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.263 [2024-12-09 15:41:32.403818] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:6293595036912670551 len:22360 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.263 [2024-12-09 15:41:32.403834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.263 #12 NEW cov: 12448 ft: 13790 corp: 7/427b lim: 100 exec/s: 0 rss: 74Mb L: 55/76 MS: 5 ShuffleBytes-ChangeByte-ShuffleBytes-ChangeASCIIInt-InsertRepeatedBytes- 00:08:37.263 [2024-12-09 15:41:32.444182] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.263 [2024-12-09 15:41:32.444209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.263 [2024-12-09 15:41:32.444257] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.263 [2024-12-09 15:41:32.444273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.263 [2024-12-09 15:41:32.444325] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.263 [2024-12-09 15:41:32.444357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.263 [2024-12-09 15:41:32.444412] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.263 [2024-12-09 15:41:32.444428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:37.522 #13 NEW cov: 12448 ft: 14214 corp: 8/509b lim: 100 exec/s: 0 rss: 74Mb L: 82/82 MS: 1 CopyPart- 00:08:37.522 [2024-12-09 15:41:32.504045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:6293595036912670551 len:22360 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.522 [2024-12-09 15:41:32.504072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.522 [2024-12-09 15:41:32.504110] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:6293595036912670551 len:22360 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.522 [2024-12-09 15:41:32.504126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.522 #14 NEW cov: 12448 ft: 14248 corp: 9/564b lim: 100 exec/s: 0 rss: 74Mb L: 55/82 MS: 1 ChangeBinInt- 00:08:37.522 [2024-12-09 15:41:32.564352] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.522 [2024-12-09 15:41:32.564379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.522 [2024-12-09 15:41:32.564423] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.522 [2024-12-09 15:41:32.564438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.522 [2024-12-09 15:41:32.564495] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.522 [2024-12-09 15:41:32.564510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.522 #15 NEW cov: 12448 ft: 14280 corp: 10/640b lim: 100 exec/s: 0 rss: 74Mb L: 76/82 MS: 1 ChangeByte- 00:08:37.522 [2024-12-09 15:41:32.604749] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.522 [2024-12-09 15:41:32.604777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.522 [2024-12-09 15:41:32.604830] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.522 [2024-12-09 15:41:32.604849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.522 [2024-12-09 15:41:32.604921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446716478544674815 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.522 [2024-12-09 15:41:32.604937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.522 [2024-12-09 15:41:32.604991] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073288476390 len:65536 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.522 [2024-12-09 15:41:32.605007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:37.522 [2024-12-09 15:41:32.605065] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.522 [2024-12-09 15:41:32.605081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:37.522 #16 NEW cov: 12448 ft: 14378 corp: 11/740b lim: 100 exec/s: 0 rss: 74Mb L: 100/100 MS: 1 InsertRepeatedBytes- 00:08:37.522 [2024-12-09 15:41:32.664518] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:6293595036912670551 len:22360 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.522 [2024-12-09 15:41:32.664545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.522 [2024-12-09 15:41:32.664586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:6293595036912670551 len:22360 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.522 [2024-12-09 15:41:32.664602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.522 #17 NEW cov: 12448 ft: 14445 corp: 12/795b lim: 100 exec/s: 0 rss: 74Mb L: 55/100 MS: 1 ShuffleBytes- 00:08:37.522 [2024-12-09 15:41:32.704579] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:6293595036912670551 len:22338 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.523 [2024-12-09 15:41:32.704609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.523 [2024-12-09 15:41:32.704670] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:6293595036912670551 len:22360 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.523 [2024-12-09 15:41:32.704686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.782 NEW_FUNC[1/1]: 0x1c54978 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:37.782 #18 NEW cov: 12471 ft: 14524 corp: 13/850b lim: 100 exec/s: 0 rss: 74Mb L: 55/100 MS: 1 ChangeByte- 00:08:37.782 [2024-12-09 15:41:32.764957] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.782 [2024-12-09 15:41:32.764984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.782 [2024-12-09 15:41:32.765047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.782 [2024-12-09 15:41:32.765064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.782 [2024-12-09 15:41:32.765121] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.782 [2024-12-09 
15:41:32.765137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.782 #19 NEW cov: 12471 ft: 14551 corp: 14/922b lim: 100 exec/s: 0 rss: 74Mb L: 72/100 MS: 1 CrossOver- 00:08:37.782 [2024-12-09 15:41:32.805170] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.782 [2024-12-09 15:41:32.805196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.782 [2024-12-09 15:41:32.805263] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.782 [2024-12-09 15:41:32.805280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.783 [2024-12-09 15:41:32.805335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.783 [2024-12-09 15:41:32.805350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.783 [2024-12-09 15:41:32.805407] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.783 [2024-12-09 15:41:32.805421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:37.783 #20 NEW cov: 12471 ft: 14584 corp: 15/1009b lim: 100 exec/s: 0 rss: 74Mb L: 87/100 MS: 1 InsertRepeatedBytes- 00:08:37.783 [2024-12-09 15:41:32.845153] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.783 [2024-12-09 15:41:32.845180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.783 [2024-12-09 15:41:32.845244] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.783 [2024-12-09 15:41:32.845262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.783 [2024-12-09 15:41:32.845320] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.783 [2024-12-09 15:41:32.845337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.783 #21 NEW cov: 12471 ft: 14631 corp: 16/1085b lim: 100 exec/s: 21 rss: 74Mb L: 76/100 MS: 1 CrossOver- 00:08:37.783 [2024-12-09 15:41:32.905343] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.783 [2024-12-09 15:41:32.905369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.783 [2024-12-09 15:41:32.905419] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.783 [2024-12-09 15:41:32.905436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.783 [2024-12-09 15:41:32.905491] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.783 [2024-12-09 15:41:32.905506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.783 #22 NEW cov: 12471 ft: 14651 corp: 17/1147b lim: 100 exec/s: 22 rss: 74Mb L: 62/100 MS: 1 EraseBytes- 00:08:37.783 [2024-12-09 15:41:32.945254] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:6293595036912670551 len:22360 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.783 [2024-12-09 15:41:32.945282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.783 [2024-12-09 15:41:32.945319] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:6293595036912670551 len:22360 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.783 [2024-12-09 15:41:32.945336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.783 #23 NEW cov: 12471 ft: 14706 corp: 18/1202b lim: 100 exec/s: 23 rss: 74Mb L: 55/100 MS: 1 ShuffleBytes- 00:08:37.783 [2024-12-09 15:41:33.005643] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.783 [2024-12-09 15:41:33.005671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.783 [2024-12-09 15:41:33.005714] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.783 [2024-12-09 15:41:33.005730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.783 [2024-12-09 15:41:33.005784] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.783 [2024-12-09 15:41:33.005800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.042 #24 NEW cov: 12471 ft: 14718 corp: 19/1274b lim: 100 exec/s: 24 rss: 74Mb L: 72/100 MS: 1 ChangeBit- 00:08:38.042 [2024-12-09 15:41:33.045664] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.042 [2024-12-09 15:41:33.045691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.042 [2024-12-09 15:41:33.045738] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.042 [2024-12-09 15:41:33.045758] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.042 [2024-12-09 15:41:33.045812] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.042 [2024-12-09 15:41:33.045827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.042 #25 NEW cov: 12471 ft: 14727 corp: 20/1350b lim: 100 exec/s: 25 rss: 74Mb L: 76/100 MS: 1 ChangeByte- 00:08:38.042 [2024-12-09 15:41:33.105885] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.042 [2024-12-09 15:41:33.105912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.042 [2024-12-09 15:41:33.105960] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.042 [2024-12-09 15:41:33.105976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.042 [2024-12-09 15:41:33.106029] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.042 [2024-12-09 15:41:33.106043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.042 #26 NEW cov: 12471 ft: 14736 corp: 21/1422b lim: 100 exec/s: 26 rss: 74Mb L: 72/100 MS: 1 ChangeBit- 00:08:38.042 [2024-12-09 15:41:33.145663] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:6293595036912670551 len:22360 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.042 [2024-12-09 15:41:33.145691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.042 #27 NEW cov: 12471 ft: 15520 corp: 22/1460b lim: 100 exec/s: 27 rss: 74Mb L: 38/100 MS: 1 EraseBytes- 00:08:38.042 [2024-12-09 15:41:33.206170] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.042 [2024-12-09 15:41:33.206198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.042 [2024-12-09 15:41:33.206241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.042 [2024-12-09 15:41:33.206257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.042 [2024-12-09 15:41:33.206315] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.042 [2024-12-09 15:41:33.206330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.042 #28 NEW cov: 12471 ft: 15534 corp: 23/1532b lim: 100 
exec/s: 28 rss: 74Mb L: 72/100 MS: 1 ChangeBit- 00:08:38.042 [2024-12-09 15:41:33.266079] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:6293595036912670551 len:22360 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.042 [2024-12-09 15:41:33.266106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.302 #29 NEW cov: 12471 ft: 15547 corp: 24/1564b lim: 100 exec/s: 29 rss: 74Mb L: 32/100 MS: 1 EraseBytes- 00:08:38.302 [2024-12-09 15:41:33.306119] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:6293595036912670551 len:22360 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.302 [2024-12-09 15:41:33.306147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.302 #30 NEW cov: 12471 ft: 15601 corp: 25/1596b lim: 100 exec/s: 30 rss: 74Mb L: 32/100 MS: 1 CopyPart- 00:08:38.302 [2024-12-09 15:41:33.366929] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.302 [2024-12-09 15:41:33.366957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.302 [2024-12-09 15:41:33.367037] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.302 [2024-12-09 15:41:33.367053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.302 [2024-12-09 15:41:33.367105] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.302 [2024-12-09 15:41:33.367119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.302 [2024-12-09 15:41:33.367173] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.302 [2024-12-09 15:41:33.367189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:38.302 [2024-12-09 15:41:33.367245] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.302 [2024-12-09 15:41:33.367261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:38.302 #31 NEW cov: 12471 ft: 15652 corp: 26/1696b lim: 100 exec/s: 31 rss: 75Mb L: 100/100 MS: 1 CrossOver- 00:08:38.302 [2024-12-09 15:41:33.406405] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:6293595036912670551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.302 [2024-12-09 15:41:33.406433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.302 #32 NEW cov: 12471 ft: 15686 corp: 27/1719b lim: 100 exec/s: 32 rss: 75Mb L: 23/100 MS: 1 EraseBytes- 00:08:38.302 [2024-12-09 15:41:33.467049] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:6293595036912670551 len:22360 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.302 [2024-12-09 15:41:33.467076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.302 [2024-12-09 15:41:33.467128] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:6293595036912670551 len:22360 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.302 [2024-12-09 15:41:33.467145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.302 [2024-12-09 15:41:33.467199] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:6293595036912670551 len:22360 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.302 [2024-12-09 15:41:33.467213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.302 [2024-12-09 15:41:33.467269] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:6293595036912670551 len:22360 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.302 [2024-12-09 15:41:33.467285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:38.302 #33 NEW cov: 12471 ft: 15724 corp: 28/1812b lim: 100 exec/s: 33 rss: 75Mb L: 93/100 MS: 1 CopyPart- 00:08:38.302 [2024-12-09 15:41:33.507141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.302 [2024-12-09 15:41:33.507171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.302 [2024-12-09 15:41:33.507215] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.302 [2024-12-09 15:41:33.507230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.302 [2024-12-09 15:41:33.507283] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.302 [2024-12-09 15:41:33.507299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.302 [2024-12-09 15:41:33.507353] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.302 [2024-12-09 15:41:33.507369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:38.562 #34 NEW cov: 12471 ft: 15734 corp: 29/1899b lim: 100 exec/s: 34 rss: 75Mb L: 87/100 MS: 1 ShuffleBytes- 00:08:38.562 [2024-12-09 15:41:33.566875] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:6293595036912670551 len:22360 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.562 [2024-12-09 15:41:33.566905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:1 00:08:38.562 #35 NEW cov: 12471 ft: 15747 corp: 30/1937b lim: 100 exec/s: 35 rss: 75Mb L: 38/100 MS: 1 ChangeByte- 00:08:38.562 [2024-12-09 15:41:33.627315] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.562 [2024-12-09 15:41:33.627343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.562 [2024-12-09 15:41:33.627406] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.562 [2024-12-09 15:41:33.627423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.562 [2024-12-09 15:41:33.627476] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551432 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.562 [2024-12-09 15:41:33.627490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.562 #36 NEW cov: 12471 ft: 15773 corp: 31/2009b lim: 100 exec/s: 36 rss: 75Mb L: 72/100 MS: 1 ChangeBinInt- 00:08:38.562 [2024-12-09 15:41:33.687498] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.562 [2024-12-09 15:41:33.687525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.562 [2024-12-09 15:41:33.687578] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.562 [2024-12-09 15:41:33.687594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.562 [2024-12-09 15:41:33.687650] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.562 [2024-12-09 15:41:33.687665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.562 #37 NEW cov: 12471 ft: 15810 corp: 32/2081b lim: 100 exec/s: 37 rss: 75Mb L: 72/100 MS: 1 ChangeBinInt- 00:08:38.562 [2024-12-09 15:41:33.727987] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65323 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.562 [2024-12-09 15:41:33.728014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.562 [2024-12-09 15:41:33.728072] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.562 [2024-12-09 15:41:33.728087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.562 [2024-12-09 15:41:33.728142] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:08:38.562 [2024-12-09 15:41:33.728158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.562 [2024-12-09 15:41:33.728212] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.562 [2024-12-09 15:41:33.728227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:38.562 [2024-12-09 15:41:33.728285] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.562 [2024-12-09 15:41:33.728301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:38.562 #38 NEW cov: 12471 ft: 15831 corp: 33/2181b lim: 100 exec/s: 38 rss: 75Mb L: 100/100 MS: 1 ChangeByte- 00:08:38.822 [2024-12-09 15:41:33.787655] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:6293595036912670551 len:22338 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.822 [2024-12-09 15:41:33.787682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.822 [2024-12-09 15:41:33.787722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:6293595036912670551 len:22360 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.822 [2024-12-09 15:41:33.787738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.822 #39 NEW cov: 12471 ft: 15836 corp: 34/2236b lim: 100 exec/s: 39 rss: 75Mb L: 55/100 MS: 1 ChangeByte- 00:08:38.822 #39 DONE cov: 12471 ft: 15836 corp: 34/2236b lim: 100 exec/s: 19 rss: 75Mb 00:08:38.822 Done 39 runs in 2 second(s) 00:08:38.822 15:41:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:08:38.822 15:41:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:38.822 15:41:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:38.822 15:41:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:08:38.822 00:08:38.822 real 1m5.740s 00:08:38.822 user 1m40.226s 00:08:38.822 sys 0m9.019s 00:08:38.822 15:41:33 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.822 15:41:33 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:38.822 ************************************ 00:08:38.822 END TEST nvmf_llvm_fuzz 00:08:38.822 ************************************ 00:08:38.822 15:41:33 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:08:38.822 15:41:33 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:08:38.822 15:41:33 llvm_fuzz -- fuzz/llvm.sh@20 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:38.822 15:41:33 llvm_fuzz -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:38.822 15:41:33 llvm_fuzz -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.822 15:41:33 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:38.822 ************************************ 00:08:38.822 START TEST vfio_llvm_fuzz 00:08:38.822 
************************************ 00:08:38.822 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:39.084 * Looking for test storage... 00:08:39.084 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.084 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:39.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.084 --rc genhtml_branch_coverage=1 00:08:39.085 --rc genhtml_function_coverage=1 00:08:39.085 --rc genhtml_legend=1 00:08:39.085 --rc geninfo_all_blocks=1 00:08:39.085 --rc geninfo_unexecuted_blocks=1 00:08:39.085 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:39.085 ' 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:39.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.085 --rc genhtml_branch_coverage=1 00:08:39.085 --rc genhtml_function_coverage=1 00:08:39.085 --rc genhtml_legend=1 00:08:39.085 --rc geninfo_all_blocks=1 00:08:39.085 --rc geninfo_unexecuted_blocks=1 00:08:39.085 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:39.085 ' 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:39.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.085 --rc genhtml_branch_coverage=1 00:08:39.085 --rc genhtml_function_coverage=1 00:08:39.085 --rc genhtml_legend=1 00:08:39.085 --rc geninfo_all_blocks=1 00:08:39.085 --rc geninfo_unexecuted_blocks=1 00:08:39.085 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:39.085 ' 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:39.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.085 --rc genhtml_branch_coverage=1 00:08:39.085 --rc genhtml_function_coverage=1 00:08:39.085 --rc genhtml_legend=1 00:08:39.085 --rc geninfo_all_blocks=1 00:08:39.085 --rc geninfo_unexecuted_blocks=1 00:08:39.085 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:39.085 ' 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:08:39.085 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:39.086 #define SPDK_CONFIG_H 00:08:39.086 #define SPDK_CONFIG_AIO_FSDEV 1 00:08:39.086 #define SPDK_CONFIG_APPS 1 00:08:39.086 #define SPDK_CONFIG_ARCH native 00:08:39.086 #undef SPDK_CONFIG_ASAN 00:08:39.086 #undef SPDK_CONFIG_AVAHI 00:08:39.086 #undef SPDK_CONFIG_CET 00:08:39.086 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:08:39.086 #define SPDK_CONFIG_COVERAGE 1 00:08:39.086 #define SPDK_CONFIG_CROSS_PREFIX 00:08:39.086 #undef SPDK_CONFIG_CRYPTO 00:08:39.086 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:39.086 #undef SPDK_CONFIG_CUSTOMOCF 00:08:39.086 #undef SPDK_CONFIG_DAOS 00:08:39.086 #define SPDK_CONFIG_DAOS_DIR 00:08:39.086 #define SPDK_CONFIG_DEBUG 1 00:08:39.086 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:39.086 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:39.086 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:39.086 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:39.086 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:39.086 #undef SPDK_CONFIG_DPDK_UADK 00:08:39.086 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:39.086 #define SPDK_CONFIG_EXAMPLES 1 00:08:39.086 #undef SPDK_CONFIG_FC 00:08:39.086 #define SPDK_CONFIG_FC_PATH 00:08:39.086 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:39.086 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:39.086 #define SPDK_CONFIG_FSDEV 1 00:08:39.086 #undef SPDK_CONFIG_FUSE 00:08:39.086 #define SPDK_CONFIG_FUZZER 1 00:08:39.086 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:08:39.086 #undef 
SPDK_CONFIG_GOLANG 00:08:39.086 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:39.086 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:39.086 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:39.086 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:39.086 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:39.086 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:39.086 #undef SPDK_CONFIG_HAVE_LZ4 00:08:39.086 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:08:39.086 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:08:39.086 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:39.086 #define SPDK_CONFIG_IDXD 1 00:08:39.086 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:39.086 #undef SPDK_CONFIG_IPSEC_MB 00:08:39.086 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:39.086 #define SPDK_CONFIG_ISAL 1 00:08:39.086 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:39.086 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:39.086 #define SPDK_CONFIG_LIBDIR 00:08:39.086 #undef SPDK_CONFIG_LTO 00:08:39.086 #define SPDK_CONFIG_MAX_LCORES 128 00:08:39.086 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:08:39.086 #define SPDK_CONFIG_NVME_CUSE 1 00:08:39.086 #undef SPDK_CONFIG_OCF 00:08:39.086 #define SPDK_CONFIG_OCF_PATH 00:08:39.086 #define SPDK_CONFIG_OPENSSL_PATH 00:08:39.086 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:39.086 #define SPDK_CONFIG_PGO_DIR 00:08:39.086 #undef SPDK_CONFIG_PGO_USE 00:08:39.086 #define SPDK_CONFIG_PREFIX /usr/local 00:08:39.086 #undef SPDK_CONFIG_RAID5F 00:08:39.086 #undef SPDK_CONFIG_RBD 00:08:39.086 #define SPDK_CONFIG_RDMA 1 00:08:39.086 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:39.086 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:39.086 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:39.086 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:39.086 #undef SPDK_CONFIG_SHARED 00:08:39.086 #undef SPDK_CONFIG_SMA 00:08:39.086 #define SPDK_CONFIG_TESTS 1 00:08:39.086 #undef SPDK_CONFIG_TSAN 00:08:39.086 #define SPDK_CONFIG_UBLK 1 00:08:39.086 #define SPDK_CONFIG_UBSAN 1 00:08:39.086 #undef SPDK_CONFIG_UNIT_TESTS 00:08:39.086 #undef SPDK_CONFIG_URING 00:08:39.086 #define SPDK_CONFIG_URING_PATH 00:08:39.086 #undef SPDK_CONFIG_URING_ZNS 00:08:39.086 #undef SPDK_CONFIG_USDT 00:08:39.086 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:39.086 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:39.086 #define SPDK_CONFIG_VFIO_USER 1 00:08:39.086 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:39.086 #define SPDK_CONFIG_VHOST 1 00:08:39.086 #define SPDK_CONFIG_VIRTIO 1 00:08:39.086 #undef SPDK_CONFIG_VTUNE 00:08:39.086 #define SPDK_CONFIG_VTUNE_DIR 00:08:39.086 #define SPDK_CONFIG_WERROR 1 00:08:39.086 #define SPDK_CONFIG_WPDK_DIR 00:08:39.086 #undef SPDK_CONFIG_XNVME 00:08:39.086 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:08:39.086 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:08:39.087 15:41:34 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # : 0 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # export 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:39.087 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:39.088 15:41:34 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@206 -- # cat 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # export 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # _LCOV= 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ 1 -eq 1 ]] 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # _LCOV=1 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@275 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # export valgrind= 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # valgrind= 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # uname -s 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@289 -- # MAKE=make 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j72 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@309 -- # TEST_MODE= 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # [[ -z 913868 ]] 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # kill -0 913868 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@344 -- # local mount target_dir 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.UvTvzH 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.UvTvzH/tests/vfio /tmp/spdk.UvTvzH 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # df -T 00:08:39.088 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=86722052096 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=94500372480 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # 
uses["$mount"]=7778320384 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=47245422592 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=47250186240 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=18894340096 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=18900074496 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5734400 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=47249788928 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=47250186240 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=397312 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=9450024960 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=9450037248 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:08:39.349 * Looking for test storage... 
00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@381 -- # local target_space new_size 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # mount=/ 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # target_space=86722052096 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@394 -- # new_size=9992912896 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:39.349 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@402 -- # return 0 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1698 -- # set -o errtrace 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1703 -- # true 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # xtrace_fd 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:08:39.349 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:39.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.350 --rc genhtml_branch_coverage=1 00:08:39.350 --rc genhtml_function_coverage=1 00:08:39.350 --rc genhtml_legend=1 00:08:39.350 --rc geninfo_all_blocks=1 00:08:39.350 --rc geninfo_unexecuted_blocks=1 00:08:39.350 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:39.350 ' 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:39.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.350 --rc genhtml_branch_coverage=1 00:08:39.350 --rc genhtml_function_coverage=1 00:08:39.350 --rc genhtml_legend=1 00:08:39.350 --rc geninfo_all_blocks=1 00:08:39.350 --rc geninfo_unexecuted_blocks=1 00:08:39.350 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:39.350 ' 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:39.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.350 --rc genhtml_branch_coverage=1 00:08:39.350 --rc genhtml_function_coverage=1 00:08:39.350 --rc genhtml_legend=1 00:08:39.350 --rc geninfo_all_blocks=1 00:08:39.350 --rc geninfo_unexecuted_blocks=1 00:08:39.350 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:39.350 ' 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:39.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.350 --rc genhtml_branch_coverage=1 00:08:39.350 --rc genhtml_function_coverage=1 00:08:39.350 --rc genhtml_legend=1 00:08:39.350 --rc geninfo_all_blocks=1 00:08:39.350 --rc geninfo_unexecuted_blocks=1 00:08:39.350 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:39.350 ' 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:08:39.350 15:41:34 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:08:39.350 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:39.350 15:41:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:08:39.350 [2024-12-09 15:41:34.464807] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:08:39.350 [2024-12-09 15:41:34.464885] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid914010 ] 00:08:39.350 [2024-12-09 15:41:34.544823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.610 [2024-12-09 15:41:34.593692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.610 INFO: Running with entropic power schedule (0xFF, 100). 00:08:39.610 INFO: Seed: 951140731 00:08:39.610 INFO: Loaded 1 modules (387777 inline 8-bit counters): 387777 [0x2c4324c, 0x2ca1d0d), 00:08:39.610 INFO: Loaded 1 PC tables (387777 PCs): 387777 [0x2ca1d10,0x328c920), 00:08:39.610 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:39.610 INFO: A corpus is not provided, starting from an empty corpus 00:08:39.610 #2 INITED exec/s: 0 rss: 68Mb 00:08:39.610 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:39.610 This may also happen if the target rejected all inputs we tried so far 00:08:39.869 [2024-12-09 15:41:34.841132] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:08:40.127 NEW_FUNC[1/676]: 0x43b608 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:08:40.127 NEW_FUNC[2/676]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:40.127 #9 NEW cov: 11241 ft: 11208 corp: 2/7b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 2 InsertRepeatedBytes-InsertByte- 00:08:40.438 #10 NEW cov: 11264 ft: 15048 corp: 3/13b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 1 ShuffleBytes- 00:08:40.761 NEW_FUNC[1/1]: 0x1c20dc8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:40.761 #11 NEW cov: 11281 ft: 16894 corp: 4/19b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 1 ChangeBinInt- 00:08:40.761 #17 NEW cov: 11281 ft: 17277 corp: 5/25b lim: 6 exec/s: 17 rss: 76Mb L: 6/6 MS: 1 ChangeBit- 00:08:41.061 #19 NEW cov: 11288 ft: 17664 corp: 6/31b lim: 6 exec/s: 19 rss: 76Mb L: 6/6 MS: 2 ChangeBit-CrossOver- 00:08:41.320 #20 NEW cov: 11288 ft: 18047 corp: 7/37b lim: 6 exec/s: 20 rss: 76Mb L: 6/6 MS: 1 ChangeBit- 00:08:41.320 #21 NEW cov: 11289 ft: 18410 corp: 8/43b lim: 6 exec/s: 21 rss: 76Mb L: 6/6 MS: 1 ChangeBit- 00:08:41.579 #30 NEW cov: 11296 ft: 18452 corp: 9/49b lim: 6 exec/s: 30 rss: 76Mb L: 6/6 MS: 4 EraseBytes-ChangeByte-CrossOver-InsertByte- 00:08:41.839 #31 NEW cov: 11296 ft: 18514 corp: 10/55b lim: 6 exec/s: 15 rss: 76Mb L: 6/6 MS: 1 ChangeBit- 00:08:41.839 #31 DONE cov: 11296 ft: 18514 corp: 10/55b lim: 6 exec/s: 15 rss: 76Mb 00:08:41.839 Done 31 runs in 2 second(s) 00:08:41.839 [2024-12-09 15:41:36.910050] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:08:42.098 15:41:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf 
/tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:08:42.098 15:41:37 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:42.098 15:41:37 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:42.098 15:41:37 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:08:42.098 15:41:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:08:42.098 15:41:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:42.098 15:41:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:42.098 15:41:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:42.098 15:41:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:08:42.098 15:41:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:08:42.098 15:41:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:08:42.098 15:41:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:08:42.098 15:41:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:42.098 15:41:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:42.098 15:41:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:42.098 15:41:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:08:42.098 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:42.098 15:41:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:42.098 15:41:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:42.098 15:41:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:08:42.098 [2024-12-09 15:41:37.177696] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:08:42.098 [2024-12-09 15:41:37.177766] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid914413 ] 00:08:42.098 [2024-12-09 15:41:37.257916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.098 [2024-12-09 15:41:37.302305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.358 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:42.358 INFO: Seed: 3661156555 00:08:42.358 INFO: Loaded 1 modules (387777 inline 8-bit counters): 387777 [0x2c4324c, 0x2ca1d0d), 00:08:42.358 INFO: Loaded 1 PC tables (387777 PCs): 387777 [0x2ca1d10,0x328c920), 00:08:42.358 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:42.358 INFO: A corpus is not provided, starting from an empty corpus 00:08:42.358 #2 INITED exec/s: 0 rss: 68Mb 00:08:42.358 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:42.358 This may also happen if the target rejected all inputs we tried so far 00:08:42.358 [2024-12-09 15:41:37.542387] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:08:42.617 [2024-12-09 15:41:37.585894] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:42.617 [2024-12-09 15:41:37.585919] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:42.617 [2024-12-09 15:41:37.585958] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:42.876 NEW_FUNC[1/678]: 0x43bba8 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:08:42.876 NEW_FUNC[2/678]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:42.876 #17 NEW cov: 11243 ft: 11173 corp: 2/5b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 5 CrossOver-CopyPart-CrossOver-CopyPart-InsertByte- 00:08:42.876 [2024-12-09 15:41:38.043510] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:42.876 [2024-12-09 15:41:38.043549] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:42.876 [2024-12-09 15:41:38.043567] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:43.135 #28 NEW cov: 11257 ft: 14432 corp: 3/9b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:43.135 [2024-12-09 15:41:38.222160] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:43.135 [2024-12-09 15:41:38.222185] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:43.135 [2024-12-09 15:41:38.222219] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:43.135 NEW_FUNC[1/1]: 0x1c20dc8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:43.135 #29 NEW cov: 11274 ft: 15342 corp: 4/13b lim: 4 exec/s: 0 rss: 76Mb L: 4/4 MS: 1 ChangeByte- 00:08:43.394 [2024-12-09 15:41:38.416639] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:43.394 [2024-12-09 15:41:38.416663] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:43.394 [2024-12-09 15:41:38.416696] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:43.394 #30 NEW cov: 11274 ft: 15625 corp: 5/17b lim: 4 exec/s: 30 rss: 76Mb L: 4/4 MS: 1 ChangeBit- 00:08:43.394 [2024-12-09 15:41:38.601270] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:43.394 [2024-12-09 15:41:38.601294] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:43.394 [2024-12-09 15:41:38.601311] vfio_user.c: 144:vfio_user_read: *ERROR*: 
Command 1 return failure 00:08:43.653 #31 NEW cov: 11274 ft: 15781 corp: 6/21b lim: 4 exec/s: 31 rss: 76Mb L: 4/4 MS: 1 ChangeBit- 00:08:43.653 [2024-12-09 15:41:38.785399] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:43.653 [2024-12-09 15:41:38.785421] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:43.653 [2024-12-09 15:41:38.785439] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:43.912 #32 NEW cov: 11274 ft: 16150 corp: 7/25b lim: 4 exec/s: 32 rss: 76Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:43.912 [2024-12-09 15:41:38.970817] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:43.912 [2024-12-09 15:41:38.970851] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:43.912 [2024-12-09 15:41:38.970896] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:43.912 #33 NEW cov: 11274 ft: 16171 corp: 8/29b lim: 4 exec/s: 33 rss: 76Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:44.171 [2024-12-09 15:41:39.158468] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:44.171 [2024-12-09 15:41:39.158491] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:44.171 [2024-12-09 15:41:39.158524] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:44.171 #34 NEW cov: 11274 ft: 17159 corp: 9/33b lim: 4 exec/s: 34 rss: 76Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:44.171 [2024-12-09 15:41:39.350233] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:44.171 [2024-12-09 15:41:39.350254] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:44.171 [2024-12-09 15:41:39.350286] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:44.430 #35 NEW cov: 11281 ft: 17461 corp: 10/37b lim: 4 exec/s: 35 rss: 76Mb L: 4/4 MS: 1 CopyPart- 00:08:44.430 [2024-12-09 15:41:39.540407] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:44.430 [2024-12-09 15:41:39.540429] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:44.430 [2024-12-09 15:41:39.540447] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:44.689 #36 NEW cov: 11281 ft: 17780 corp: 11/41b lim: 4 exec/s: 18 rss: 77Mb L: 4/4 MS: 1 CrossOver- 00:08:44.689 #36 DONE cov: 11281 ft: 17780 corp: 11/41b lim: 4 exec/s: 18 rss: 77Mb 00:08:44.689 Done 36 runs in 2 second(s) 00:08:44.689 [2024-12-09 15:41:39.678070] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:08:44.689 15:41:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:08:44.689 15:41:39 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:44.689 15:41:39 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:44.949 15:41:39 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:08:44.949 15:41:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:08:44.949 15:41:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:44.949 15:41:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:44.949 15:41:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local 
corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:44.949 15:41:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:08:44.949 15:41:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:08:44.949 15:41:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:08:44.949 15:41:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:08:44.949 15:41:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:44.949 15:41:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:44.949 15:41:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:44.949 15:41:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:08:44.949 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:44.949 15:41:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:44.949 15:41:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:44.949 15:41:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:08:44.949 [2024-12-09 15:41:39.940976] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:08:44.949 [2024-12-09 15:41:39.941032] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid914818 ] 00:08:44.949 [2024-12-09 15:41:40.022335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.949 [2024-12-09 15:41:40.073040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.208 INFO: Running with entropic power schedule (0xFF, 100). 00:08:45.208 INFO: Seed: 2142182697 00:08:45.208 INFO: Loaded 1 modules (387777 inline 8-bit counters): 387777 [0x2c4324c, 0x2ca1d0d), 00:08:45.208 INFO: Loaded 1 PC tables (387777 PCs): 387777 [0x2ca1d10,0x328c920), 00:08:45.208 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:45.208 INFO: A corpus is not provided, starting from an empty corpus 00:08:45.208 #2 INITED exec/s: 0 rss: 68Mb 00:08:45.208 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:45.208 This may also happen if the target rejected all inputs we tried so far 00:08:45.208 [2024-12-09 15:41:40.323417] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:08:45.208 [2024-12-09 15:41:40.399950] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:45.725 NEW_FUNC[1/676]: 0x43c598 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:08:45.725 NEW_FUNC[2/676]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:45.725 #14 NEW cov: 11221 ft: 11193 corp: 2/9b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:45.725 [2024-12-09 15:41:40.902239] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:46.027 #30 NEW cov: 11239 ft: 14554 corp: 3/17b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 ChangeBit- 00:08:46.027 [2024-12-09 15:41:41.105706] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:46.027 NEW_FUNC[1/1]: 0x1c20dc8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:46.027 #36 NEW cov: 11256 ft: 15480 corp: 4/25b lim: 8 exec/s: 0 rss: 76Mb L: 8/8 MS: 1 ShuffleBytes- 00:08:46.286 [2024-12-09 15:41:41.283205] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:46.286 #37 NEW cov: 11256 ft: 16859 corp: 5/33b lim: 8 exec/s: 37 rss: 76Mb L: 8/8 MS: 1 ChangeByte- 00:08:46.286 [2024-12-09 15:41:41.478587] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:46.544 #43 NEW cov: 11256 ft: 17497 corp: 6/41b lim: 8 exec/s: 43 rss: 76Mb L: 8/8 MS: 1 ChangeBinInt- 00:08:46.544 [2024-12-09 15:41:41.680498] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:46.803 #44 NEW cov: 11256 ft: 17775 corp: 7/49b lim: 8 exec/s: 44 rss: 77Mb L: 8/8 MS: 1 ShuffleBytes- 00:08:46.803 [2024-12-09 15:41:41.881665] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:46.803 #45 NEW cov: 11256 ft: 17921 corp: 8/57b lim: 8 exec/s: 45 rss: 77Mb L: 8/8 MS: 1 ShuffleBytes- 00:08:47.061 [2024-12-09 15:41:42.059770] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:47.061 #46 NEW cov: 11263 ft: 18118 corp: 9/65b lim: 8 exec/s: 46 rss: 77Mb L: 8/8 MS: 1 ShuffleBytes- 00:08:47.061 [2024-12-09 15:41:42.242740] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:47.320 #47 NEW cov: 11263 ft: 18385 corp: 10/73b lim: 8 exec/s: 23 rss: 77Mb L: 8/8 MS: 1 ChangeByte- 00:08:47.320 #47 DONE cov: 11263 ft: 18385 corp: 10/73b lim: 8 exec/s: 23 rss: 77Mb 00:08:47.320 Done 47 runs in 2 second(s) 00:08:47.320 [2024-12-09 15:41:42.375054] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:08:47.580 15:41:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:08:47.580 15:41:42 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:47.580 15:41:42 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:47.580 15:41:42 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:08:47.580 15:41:42 llvm_fuzz.vfio_llvm_fuzz -- 
vfio/run.sh@22 -- # local fuzzer_type=3 00:08:47.580 15:41:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:47.580 15:41:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:47.580 15:41:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:47.580 15:41:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:08:47.580 15:41:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:08:47.580 15:41:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:08:47.580 15:41:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:08:47.580 15:41:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:47.580 15:41:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:47.580 15:41:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:47.580 15:41:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:08:47.580 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:47.580 15:41:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:47.580 15:41:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:47.580 15:41:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:08:47.580 [2024-12-09 15:41:42.634892] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:08:47.580 [2024-12-09 15:41:42.634947] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid915173 ] 00:08:47.580 [2024-12-09 15:41:42.707041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.580 [2024-12-09 15:41:42.751219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.839 INFO: Running with entropic power schedule (0xFF, 100). 00:08:47.839 INFO: Seed: 525205993 00:08:47.839 INFO: Loaded 1 modules (387777 inline 8-bit counters): 387777 [0x2c4324c, 0x2ca1d0d), 00:08:47.839 INFO: Loaded 1 PC tables (387777 PCs): 387777 [0x2ca1d10,0x328c920), 00:08:47.839 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:47.839 INFO: A corpus is not provided, starting from an empty corpus 00:08:47.839 #2 INITED exec/s: 0 rss: 68Mb 00:08:47.839 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:47.839 This may also happen if the target rejected all inputs we tried so far 00:08:47.839 [2024-12-09 15:41:42.996529] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:08:48.357 NEW_FUNC[1/677]: 0x43cc88 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:08:48.357 NEW_FUNC[2/677]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:48.357 #197 NEW cov: 11235 ft: 11204 corp: 2/33b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 5 CMP-ChangeByte-CopyPart-InsertRepeatedBytes-InsertByte- DE: "\377\377\377\011"- 00:08:48.617 #198 NEW cov: 11249 ft: 14042 corp: 3/65b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:48.617 NEW_FUNC[1/1]: 0x1c20dc8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:48.617 #199 NEW cov: 11266 ft: 15909 corp: 4/97b lim: 32 exec/s: 0 rss: 77Mb L: 32/32 MS: 1 CopyPart- 00:08:48.876 #201 NEW cov: 11266 ft: 16897 corp: 5/129b lim: 32 exec/s: 201 rss: 77Mb L: 32/32 MS: 2 EraseBytes-CopyPart- 00:08:49.134 #211 NEW cov: 11266 ft: 17433 corp: 6/161b lim: 32 exec/s: 211 rss: 77Mb L: 32/32 MS: 5 EraseBytes-CopyPart-InsertRepeatedBytes-InsertRepeatedBytes-InsertRepeatedBytes- 00:08:49.134 #212 NEW cov: 11266 ft: 17552 corp: 7/193b lim: 32 exec/s: 212 rss: 77Mb L: 32/32 MS: 1 ChangeByte- 00:08:49.393 #218 NEW cov: 11266 ft: 17730 corp: 8/225b lim: 32 exec/s: 218 rss: 78Mb L: 32/32 MS: 1 CopyPart- 00:08:49.652 #219 NEW cov: 11266 ft: 17780 corp: 9/257b lim: 32 exec/s: 219 rss: 78Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:49.652 #225 NEW cov: 11273 ft: 17880 corp: 10/289b lim: 32 exec/s: 225 rss: 78Mb L: 32/32 MS: 1 CrossOver- 00:08:49.911 #226 NEW cov: 11273 ft: 18200 corp: 11/321b lim: 32 exec/s: 113 rss: 78Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:49.911 #226 DONE cov: 11273 ft: 18200 corp: 11/321b lim: 32 exec/s: 113 rss: 78Mb 00:08:49.911 ###### Recommended dictionary. ###### 00:08:49.911 "\377\377\377\011" # Uses: 2 00:08:49.911 ###### End of recommended dictionary. 
###### 00:08:49.911 Done 226 runs in 2 second(s) 00:08:49.911 [2024-12-09 15:41:45.063054] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:08:50.169 15:41:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:08:50.169 15:41:45 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:50.169 15:41:45 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:50.169 15:41:45 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:50.169 15:41:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:08:50.169 15:41:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:50.169 15:41:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:50.169 15:41:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:50.169 15:41:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:08:50.169 15:41:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:08:50.169 15:41:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:08:50.169 15:41:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:08:50.169 15:41:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:50.170 15:41:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:50.170 15:41:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:50.170 15:41:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:08:50.170 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:50.170 15:41:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:50.170 15:41:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:50.170 15:41:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:08:50.170 [2024-12-09 15:41:45.328625] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:08:50.170 [2024-12-09 15:41:45.328694] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid915535 ] 00:08:50.429 [2024-12-09 15:41:45.408475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.429 [2024-12-09 15:41:45.452904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.429 INFO: Running with entropic power schedule (0xFF, 100). 00:08:50.429 INFO: Seed: 3221211255 00:08:50.688 INFO: Loaded 1 modules (387777 inline 8-bit counters): 387777 [0x2c4324c, 0x2ca1d0d), 00:08:50.688 INFO: Loaded 1 PC tables (387777 PCs): 387777 [0x2ca1d10,0x328c920), 00:08:50.688 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:50.688 INFO: A corpus is not provided, starting from an empty corpus 00:08:50.688 #2 INITED exec/s: 0 rss: 68Mb 00:08:50.688 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:50.688 This may also happen if the target rejected all inputs we tried so far 00:08:50.688 [2024-12-09 15:41:45.709230] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:08:50.948 NEW_FUNC[1/676]: 0x43d508 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:08:50.948 NEW_FUNC[2/676]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:50.948 #529 NEW cov: 11233 ft: 11199 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:08:51.207 #530 NEW cov: 11250 ft: 14784 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ChangeByte- 00:08:51.466 NEW_FUNC[1/1]: 0x1c20dc8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:51.466 #531 NEW cov: 11267 ft: 14917 corp: 4/97b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:51.725 #532 NEW cov: 11267 ft: 15598 corp: 5/129b lim: 32 exec/s: 532 rss: 76Mb L: 32/32 MS: 1 ChangeByte- 00:08:51.984 #533 NEW cov: 11267 ft: 15719 corp: 6/161b lim: 32 exec/s: 533 rss: 76Mb L: 32/32 MS: 1 ChangeBit- 00:08:51.984 #539 NEW cov: 11267 ft: 16110 corp: 7/193b lim: 32 exec/s: 539 rss: 76Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:52.243 #540 NEW cov: 11267 ft: 17232 corp: 8/225b lim: 32 exec/s: 540 rss: 76Mb L: 32/32 MS: 1 CopyPart- 00:08:52.503 #541 NEW cov: 11274 ft: 17647 corp: 9/257b lim: 32 exec/s: 541 rss: 76Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:52.762 #542 NEW cov: 11274 ft: 17806 corp: 10/289b lim: 32 exec/s: 271 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:52.762 #542 DONE cov: 11274 ft: 17806 corp: 10/289b lim: 32 exec/s: 271 rss: 76Mb 00:08:52.762 Done 542 runs in 2 second(s) 00:08:52.762 [2024-12-09 15:41:47.758059] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:08:52.762 15:41:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:08:52.762 15:41:47 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:52.762 15:41:47 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:52.762 15:41:47 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:08:52.762 15:41:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # 
local fuzzer_type=5 00:08:52.762 15:41:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:52.762 15:41:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:52.762 15:41:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:52.762 15:41:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:08:52.762 15:41:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:08:53.022 15:41:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:08:53.022 15:41:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:08:53.022 15:41:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:53.022 15:41:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:53.022 15:41:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:53.022 15:41:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:08:53.022 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:53.022 15:41:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:53.022 15:41:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:53.022 15:41:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:08:53.022 [2024-12-09 15:41:48.008419] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:08:53.022 [2024-12-09 15:41:48.008473] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid915896 ] 00:08:53.022 [2024-12-09 15:41:48.087902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.022 [2024-12-09 15:41:48.132240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.281 INFO: Running with entropic power schedule (0xFF, 100). 00:08:53.281 INFO: Seed: 1604236837 00:08:53.281 INFO: Loaded 1 modules (387777 inline 8-bit counters): 387777 [0x2c4324c, 0x2ca1d0d), 00:08:53.282 INFO: Loaded 1 PC tables (387777 PCs): 387777 [0x2ca1d10,0x328c920), 00:08:53.282 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:53.282 INFO: A corpus is not provided, starting from an empty corpus 00:08:53.282 #2 INITED exec/s: 0 rss: 68Mb 00:08:53.282 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:53.282 This may also happen if the target rejected all inputs we tried so far 00:08:53.282 [2024-12-09 15:41:48.371081] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:08:53.282 [2024-12-09 15:41:48.422881] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:53.282 [2024-12-09 15:41:48.422916] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:53.801 NEW_FUNC[1/667]: 0x43df08 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:08:53.801 NEW_FUNC[2/667]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:53.801 #7 NEW cov: 10989 ft: 11211 corp: 2/14b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 5 ChangeByte-InsertRepeatedBytes-EraseBytes-CrossOver-CopyPart- 00:08:53.801 [2024-12-09 15:41:48.887016] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:53.801 [2024-12-09 15:41:48.887063] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:53.801 NEW_FUNC[1/11]: 0x443a38 in write_complete /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:353 00:08:53.801 NEW_FUNC[2/11]: 0x444978 in read_complete /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:324 00:08:53.801 #13 NEW cov: 11263 ft: 13932 corp: 3/27b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:54.061 [2024-12-09 15:41:49.057564] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:54.061 [2024-12-09 15:41:49.057600] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:54.061 NEW_FUNC[1/1]: 0x1c20dc8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:54.061 #14 NEW cov: 11280 ft: 14717 corp: 4/40b lim: 13 exec/s: 0 rss: 76Mb L: 13/13 MS: 1 ChangeByte- 00:08:54.061 [2024-12-09 15:41:49.231842] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:54.061 [2024-12-09 15:41:49.231895] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:54.320 #20 NEW cov: 11280 ft: 16099 corp: 5/53b lim: 13 exec/s: 20 rss: 76Mb L: 13/13 MS: 1 CrossOver- 00:08:54.320 [2024-12-09 15:41:49.418052] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:54.320 [2024-12-09 15:41:49.418083] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:54.320 #26 NEW cov: 11280 ft: 16889 corp: 6/66b lim: 13 exec/s: 26 rss: 77Mb L: 13/13 MS: 1 ChangeByte- 00:08:54.579 [2024-12-09 15:41:49.607133] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:54.579 [2024-12-09 15:41:49.607164] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:54.579 #27 NEW cov: 11280 ft: 17160 corp: 7/79b lim: 13 exec/s: 27 rss: 77Mb L: 13/13 MS: 1 ChangeBit- 00:08:54.579 [2024-12-09 15:41:49.782876] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:54.579 [2024-12-09 15:41:49.782905] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:54.838 #34 NEW cov: 11280 ft: 17315 corp: 8/92b lim: 13 exec/s: 34 
rss: 77Mb L: 13/13 MS: 2 CrossOver-InsertRepeatedBytes- 00:08:54.838 [2024-12-09 15:41:49.951043] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:54.838 [2024-12-09 15:41:49.951071] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:54.838 #40 NEW cov: 11280 ft: 17338 corp: 9/105b lim: 13 exec/s: 40 rss: 77Mb L: 13/13 MS: 1 ChangeByte- 00:08:55.097 [2024-12-09 15:41:50.119606] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:55.097 [2024-12-09 15:41:50.119644] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:55.097 #41 NEW cov: 11287 ft: 17741 corp: 10/118b lim: 13 exec/s: 41 rss: 77Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:55.097 [2024-12-09 15:41:50.287939] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:55.097 [2024-12-09 15:41:50.287971] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:55.356 #42 NEW cov: 11287 ft: 17844 corp: 11/131b lim: 13 exec/s: 21 rss: 77Mb L: 13/13 MS: 1 ChangeByte- 00:08:55.356 #42 DONE cov: 11287 ft: 17844 corp: 11/131b lim: 13 exec/s: 21 rss: 77Mb 00:08:55.356 Done 42 runs in 2 second(s) 00:08:55.356 [2024-12-09 15:41:50.410048] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:08:55.616 15:41:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:08:55.616 15:41:50 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:55.616 15:41:50 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:55.616 15:41:50 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:08:55.616 15:41:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:08:55.616 15:41:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:55.616 15:41:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:55.616 15:41:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:55.616 15:41:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:08:55.616 15:41:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:08:55.616 15:41:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:08:55.616 15:41:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:08:55.616 15:41:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:55.616 15:41:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:55.616 15:41:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:55.616 15:41:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:08:55.616 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:55.616 15:41:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 
-- # echo leak:spdk_nvmf_qpair_disconnect 00:08:55.616 15:41:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:55.616 15:41:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:08:55.616 [2024-12-09 15:41:50.676730] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:08:55.616 [2024-12-09 15:41:50.676803] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid916256 ] 00:08:55.616 [2024-12-09 15:41:50.754967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.616 [2024-12-09 15:41:50.799314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.876 INFO: Running with entropic power schedule (0xFF, 100). 00:08:55.876 INFO: Seed: 4273240742 00:08:55.876 INFO: Loaded 1 modules (387777 inline 8-bit counters): 387777 [0x2c4324c, 0x2ca1d0d), 00:08:55.876 INFO: Loaded 1 PC tables (387777 PCs): 387777 [0x2ca1d10,0x328c920), 00:08:55.876 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:55.876 INFO: A corpus is not provided, starting from an empty corpus 00:08:55.876 #2 INITED exec/s: 0 rss: 68Mb 00:08:55.876 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:55.876 This may also happen if the target rejected all inputs we tried so far 00:08:55.876 [2024-12-09 15:41:51.040386] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:08:55.876 [2024-12-09 15:41:51.091880] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:55.876 [2024-12-09 15:41:51.091912] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:56.393 NEW_FUNC[1/676]: 0x43ebf8 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:08:56.393 NEW_FUNC[2/676]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:56.393 #17 NEW cov: 11239 ft: 11208 corp: 2/10b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 5 ShuffleBytes-ShuffleBytes-CMP-CrossOver-CopyPart- DE: "\000\000\000\006"- 00:08:56.393 [2024-12-09 15:41:51.561292] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:56.393 [2024-12-09 15:41:51.561336] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:56.652 NEW_FUNC[1/2]: 0xf774b8 in spdk_get_ticks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/env.c:321 00:08:56.652 NEW_FUNC[2/2]: 0xf77528 in rte_get_timer_cycles /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/include/generic/rte_cycles.h:94 00:08:56.652 #23 NEW cov: 11255 ft: 14772 corp: 3/19b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 1 CMP- DE: "\377\377\377\377\003u\363 "- 00:08:56.652 [2024-12-09 15:41:51.754208] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:56.652 [2024-12-09 15:41:51.754247] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:56.652 NEW_FUNC[1/1]: 0x1c20dc8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:56.652 #29 NEW cov: 11272 ft: 16055 corp: 4/28b lim: 9 exec/s: 0 rss: 76Mb L: 9/9 MS: 1 ShuffleBytes- 00:08:56.911 [2024-12-09 15:41:51.937002] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:56.911 [2024-12-09 15:41:51.937035] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:56.911 #30 NEW cov: 11272 ft: 16632 corp: 5/37b lim: 9 exec/s: 30 rss: 76Mb L: 9/9 MS: 1 ChangeBit- 00:08:56.911 [2024-12-09 15:41:52.114232] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:56.911 [2024-12-09 15:41:52.114264] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:57.170 #31 NEW cov: 11272 ft: 17007 corp: 6/46b lim: 9 exec/s: 31 rss: 76Mb L: 9/9 MS: 1 CrossOver- 00:08:57.170 [2024-12-09 15:41:52.292395] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:57.170 [2024-12-09 15:41:52.292425] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:57.429 #32 NEW cov: 11272 ft: 17211 corp: 7/55b lim: 9 exec/s: 32 rss: 76Mb L: 9/9 MS: 1 CopyPart- 00:08:57.429 [2024-12-09 15:41:52.469761] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:57.429 [2024-12-09 15:41:52.469791] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:57.429 #33 NEW cov: 11272 ft: 17259 corp: 8/64b lim: 9 exec/s: 33 rss: 76Mb 
L: 9/9 MS: 1 CrossOver- 00:08:57.429 [2024-12-09 15:41:52.653179] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:57.429 [2024-12-09 15:41:52.653208] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:57.688 #34 NEW cov: 11272 ft: 17326 corp: 9/73b lim: 9 exec/s: 34 rss: 76Mb L: 9/9 MS: 1 ChangeByte- 00:08:57.688 [2024-12-09 15:41:52.828537] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:57.688 [2024-12-09 15:41:52.828568] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:57.948 #35 NEW cov: 11279 ft: 17423 corp: 10/82b lim: 9 exec/s: 35 rss: 76Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:57.948 [2024-12-09 15:41:53.017197] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:57.948 [2024-12-09 15:41:53.017228] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:57.948 #36 NEW cov: 11279 ft: 17581 corp: 11/91b lim: 9 exec/s: 18 rss: 76Mb L: 9/9 MS: 1 ChangeBit- 00:08:57.948 #36 DONE cov: 11279 ft: 17581 corp: 11/91b lim: 9 exec/s: 18 rss: 76Mb 00:08:57.948 ###### Recommended dictionary. ###### 00:08:57.948 "\000\000\000\006" # Uses: 0 00:08:57.948 "\377\377\377\377\003u\363 " # Uses: 0 00:08:57.948 ###### End of recommended dictionary. ###### 00:08:57.948 Done 36 runs in 2 second(s) 00:08:57.948 [2024-12-09 15:41:53.140063] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:08:58.207 15:41:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:08:58.207 15:41:53 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:58.207 15:41:53 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:58.207 15:41:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:08:58.207 00:08:58.207 real 0m19.347s 00:08:58.207 user 0m27.382s 00:08:58.207 sys 0m1.821s 00:08:58.207 15:41:53 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.207 15:41:53 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:58.207 ************************************ 00:08:58.207 END TEST vfio_llvm_fuzz 00:08:58.207 ************************************ 00:08:58.207 00:08:58.207 real 1m25.434s 00:08:58.207 user 2m7.778s 00:08:58.207 sys 0m11.048s 00:08:58.207 15:41:53 llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.207 15:41:53 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:58.207 ************************************ 00:08:58.207 END TEST llvm_fuzz 00:08:58.207 ************************************ 00:08:58.466 15:41:53 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:08:58.466 15:41:53 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:08:58.466 15:41:53 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:08:58.466 15:41:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:58.466 15:41:53 -- common/autotest_common.sh@10 -- # set +x 00:08:58.466 15:41:53 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:08:58.466 15:41:53 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:08:58.466 15:41:53 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:08:58.466 15:41:53 -- common/autotest_common.sh@10 -- # set +x 00:09:02.658 INFO: APP EXITING 00:09:02.658 INFO: killing all VMs 00:09:02.658 INFO: killing vhost app 00:09:02.658 INFO: 
EXIT DONE 00:09:04.566 Waiting for block devices as requested 00:09:04.566 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:09:04.566 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:04.566 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:09:04.566 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:09:04.566 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:04.825 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:04.825 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:04.825 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:04.825 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:09:05.084 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:05.084 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:09:05.084 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:09:05.343 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:05.343 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:05.343 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:05.602 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:05.602 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:09:08.893 Cleaning 00:09:08.893 Removing: /dev/shm/spdk_tgt_trace.pid894476 00:09:08.893 Removing: /var/run/dpdk/spdk_pid892046 00:09:08.893 Removing: /var/run/dpdk/spdk_pid893256 00:09:08.893 Removing: /var/run/dpdk/spdk_pid894476 00:09:08.893 Removing: /var/run/dpdk/spdk_pid894849 00:09:08.893 Removing: /var/run/dpdk/spdk_pid895654 00:09:08.893 Removing: /var/run/dpdk/spdk_pid895757 00:09:08.893 Removing: /var/run/dpdk/spdk_pid896520 00:09:08.893 Removing: /var/run/dpdk/spdk_pid896646 00:09:08.893 Removing: /var/run/dpdk/spdk_pid897026 00:09:08.893 Removing: /var/run/dpdk/spdk_pid897268 00:09:08.893 Removing: /var/run/dpdk/spdk_pid897501 00:09:08.893 Removing: /var/run/dpdk/spdk_pid897751 00:09:08.893 Removing: /var/run/dpdk/spdk_pid897928 00:09:08.893 Removing: /var/run/dpdk/spdk_pid898116 00:09:08.893 Removing: /var/run/dpdk/spdk_pid898272 00:09:08.893 Removing: /var/run/dpdk/spdk_pid898563 00:09:08.893 Removing: /var/run/dpdk/spdk_pid899210 00:09:08.893 Removing: /var/run/dpdk/spdk_pid901556 00:09:08.893 Removing: /var/run/dpdk/spdk_pid901758 00:09:08.893 Removing: /var/run/dpdk/spdk_pid901961 00:09:08.893 Removing: /var/run/dpdk/spdk_pid902080 00:09:08.893 Removing: /var/run/dpdk/spdk_pid902437 00:09:08.893 Removing: /var/run/dpdk/spdk_pid902526 00:09:08.893 Removing: /var/run/dpdk/spdk_pid902920 00:09:08.893 Removing: /var/run/dpdk/spdk_pid902942 00:09:08.893 Removing: /var/run/dpdk/spdk_pid903295 00:09:08.893 Removing: /var/run/dpdk/spdk_pid903301 00:09:08.893 Removing: /var/run/dpdk/spdk_pid903505 00:09:08.893 Removing: /var/run/dpdk/spdk_pid903510 00:09:08.893 Removing: /var/run/dpdk/spdk_pid903970 00:09:08.893 Removing: /var/run/dpdk/spdk_pid904163 00:09:08.893 Removing: /var/run/dpdk/spdk_pid904357 00:09:08.893 Removing: /var/run/dpdk/spdk_pid904532 00:09:08.893 Removing: /var/run/dpdk/spdk_pid905013 00:09:08.893 Removing: /var/run/dpdk/spdk_pid905375 00:09:08.893 Removing: /var/run/dpdk/spdk_pid905733 00:09:08.893 Removing: /var/run/dpdk/spdk_pid906090 00:09:08.893 Removing: /var/run/dpdk/spdk_pid906453 00:09:08.893 Removing: /var/run/dpdk/spdk_pid906806 00:09:08.893 Removing: /var/run/dpdk/spdk_pid907176 00:09:08.893 Removing: /var/run/dpdk/spdk_pid907529 00:09:08.893 Removing: /var/run/dpdk/spdk_pid907894 00:09:08.893 Removing: /var/run/dpdk/spdk_pid908247 00:09:08.893 Removing: /var/run/dpdk/spdk_pid908606 00:09:08.893 Removing: /var/run/dpdk/spdk_pid908962 00:09:08.893 Removing: /var/run/dpdk/spdk_pid909337 00:09:08.893 
Removing: /var/run/dpdk/spdk_pid909674 00:09:08.893 Removing: /var/run/dpdk/spdk_pid910017 00:09:08.893 Removing: /var/run/dpdk/spdk_pid910359 00:09:08.893 Removing: /var/run/dpdk/spdk_pid910707 00:09:08.893 Removing: /var/run/dpdk/spdk_pid911082 00:09:08.893 Removing: /var/run/dpdk/spdk_pid911397 00:09:08.893 Removing: /var/run/dpdk/spdk_pid911721 00:09:08.893 Removing: /var/run/dpdk/spdk_pid912046 00:09:08.893 Removing: /var/run/dpdk/spdk_pid912404 00:09:08.893 Removing: /var/run/dpdk/spdk_pid912764 00:09:08.893 Removing: /var/run/dpdk/spdk_pid913124 00:09:08.893 Removing: /var/run/dpdk/spdk_pid913479 00:09:08.893 Removing: /var/run/dpdk/spdk_pid914010 00:09:08.893 Removing: /var/run/dpdk/spdk_pid914413 00:09:08.893 Removing: /var/run/dpdk/spdk_pid914818 00:09:08.893 Removing: /var/run/dpdk/spdk_pid915173 00:09:08.893 Removing: /var/run/dpdk/spdk_pid915535 00:09:08.893 Removing: /var/run/dpdk/spdk_pid915896 00:09:08.893 Removing: /var/run/dpdk/spdk_pid916256 00:09:08.893 Clean 00:09:08.893 15:42:03 -- common/autotest_common.sh@1453 -- # return 0 00:09:08.893 15:42:03 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:09:08.893 15:42:03 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:08.893 15:42:03 -- common/autotest_common.sh@10 -- # set +x 00:09:08.893 15:42:03 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:09:08.893 15:42:03 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:08.893 15:42:03 -- common/autotest_common.sh@10 -- # set +x 00:09:08.893 15:42:03 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:09:08.893 15:42:03 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:09:08.893 15:42:03 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:09:08.893 15:42:03 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:09:08.893 15:42:03 -- spdk/autotest.sh@398 -- # hostname 00:09:08.893 15:42:03 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-49 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info 00:09:08.893 geninfo: WARNING: invalid characters removed from testname! 
00:09:14.171 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcda 00:09:19.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcda 00:09:20.827 15:42:15 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:09:28.950 15:42:23 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:09:34.224 15:42:29 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:09:39.499 15:42:34 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:09:46.070 15:42:40 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:09:50.263 15:42:45 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:09:56.834 15:42:50 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:09:56.834 15:42:50 -- spdk/autorun.sh@1 -- $ timing_finish 00:09:56.834 15:42:50 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt ]] 00:09:56.834 15:42:50 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:09:56.834 15:42:50 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:09:56.834 15:42:50 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:09:56.834 + [[ -n 793479 ]] 00:09:56.834 + sudo kill 793479 00:09:56.845 [Pipeline] } 00:09:56.860 [Pipeline] // stage 00:09:56.865 [Pipeline] } 00:09:56.880 [Pipeline] // timeout 00:09:56.885 [Pipeline] } 00:09:56.899 [Pipeline] // catchError 00:09:56.904 [Pipeline] } 00:09:56.919 [Pipeline] // wrap 00:09:56.924 [Pipeline] } 00:09:56.937 [Pipeline] // catchError 00:09:56.946 [Pipeline] stage 00:09:56.948 [Pipeline] { (Epilogue) 00:09:56.961 [Pipeline] catchError 00:09:56.963 [Pipeline] { 00:09:56.977 [Pipeline] echo 00:09:56.979 Cleanup processes 00:09:56.985 [Pipeline] sh 00:09:57.275 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:57.275 922515 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:57.288 [Pipeline] sh 00:09:57.576 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:57.576 ++ grep -v 'sudo pgrep' 00:09:57.576 ++ awk '{print $1}' 00:09:57.576 + sudo kill -9 00:09:57.576 + true 00:09:57.588 [Pipeline] sh 00:09:57.875 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:10:10.102 [Pipeline] sh 00:10:10.387 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:10:10.387 Artifacts sizes are good 00:10:10.401 [Pipeline] archiveArtifacts 00:10:10.408 Archiving artifacts 00:10:10.574 [Pipeline] sh 00:10:10.937 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest 00:10:10.978 [Pipeline] cleanWs 00:10:11.005 [WS-CLEANUP] Deleting project workspace... 00:10:11.005 [WS-CLEANUP] Deferred wipeout is used... 00:10:11.016 [WS-CLEANUP] done 00:10:11.018 [Pipeline] } 00:10:11.036 [Pipeline] // catchError 00:10:11.047 [Pipeline] sh 00:10:11.331 + logger -p user.info -t JENKINS-CI 00:10:11.340 [Pipeline] } 00:10:11.353 [Pipeline] // stage 00:10:11.358 [Pipeline] } 00:10:11.372 [Pipeline] // node 00:10:11.377 [Pipeline] End of Pipeline 00:10:11.418 Finished: SUCCESS